[ { "msg_contents": "Hi,\n\nSharedRecoveryState member of XLogCtl is no longer a boolean flag, got changes\nin 4e87c4836ab9 to enum but, comment referring to it still referred as the\nboolean flag which is pretty confusing and incorrect.\n\nAlso, the last part of the same comment is as:\n\n\" .. although the boolean flag to allow WAL is probably atomic in\nitself, .....\",\n\nI am a bit confused here too about saying \"atomic\" to it, is that correct?\nI haven't done anything about it, only replaced the \"boolean flag\" to \"recovery\nstate\" in the attached patch.\n\nRegards,\nAmul", "msg_date": "Wed, 3 Feb 2021 14:27:50 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Correct comment in StartupXLOG()." }, { "msg_contents": "On Wed, Feb 3, 2021 at 2:28 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> Hi,\n>\n> SharedRecoveryState member of XLogCtl is no longer a boolean flag, got changes\n> in 4e87c4836ab9 to enum but, comment referring to it still referred as the\n> boolean flag which is pretty confusing and incorrect.\n\n+1 for the comment change\n\n> Also, the last part of the same comment is as:\n>\n> \" .. although the boolean flag to allow WAL is probably atomic in\n> itself, .....\",\n>\n> I am a bit confused here too about saying \"atomic\" to it, is that correct?\n> I haven't done anything about it, only replaced the \"boolean flag\" to \"recovery\n> state\" in the attached patch.\n\nI don't think the atomic is correct, it's no more boolean so it is\nbetter we get rid of this part of the comment\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Feb 2021 14:47:56 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Correct comment in StartupXLOG()." 
}, { "msg_contents": "On Wed, Feb 3, 2021 at 2:48 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Feb 3, 2021 at 2:28 PM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > SharedRecoveryState member of XLogCtl is no longer a boolean flag, got changes\n> > in 4e87c4836ab9 to enum but, comment referring to it still referred as the\n> > boolean flag which is pretty confusing and incorrect.\n>\n> +1 for the comment change\n>\n> > Also, the last part of the same comment is as:\n> >\n> > \" .. although the boolean flag to allow WAL is probably atomic in\n> > itself, .....\",\n> >\n> > I am a bit confused here too about saying \"atomic\" to it, is that correct?\n> > I haven't done anything about it, only replaced the \"boolean flag\" to \"recovery\n> > state\" in the attached patch.\n>\n> I don't think the atomic is correct, it's no more boolean so it is\n> better we get rid of this part of the comment\n\nThanks for the confirmation. Updated that part in the attached version.\n\nRegards,\nAmul", "msg_date": "Wed, 3 Feb 2021 16:36:13 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Correct comment in StartupXLOG()." }, { "msg_contents": "At Wed, 3 Feb 2021 16:36:13 +0530, Amul Sul <sulamul@gmail.com> wrote in \n> On Wed, Feb 3, 2021 at 2:48 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Wed, Feb 3, 2021 at 2:28 PM Amul Sul <sulamul@gmail.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > SharedRecoveryState member of XLogCtl is no longer a boolean flag, got changes\n> > > in 4e87c4836ab9 to enum but, comment referring to it still referred as the\n> > > boolean flag which is pretty confusing and incorrect.\n> >\n> > +1 for the comment change\n\nActually the \"flag\" has been changed to an integer (emnum), so it\nneeds a change. However, the current proposal:\n\n \t * Now allow backends to write WAL and update the control file status in\n-\t * consequence. 
The boolean flag allowing backends to write WAL is\n+\t * consequence. The recovery state allowing backends to write WAL is\n \t * updated while holding ControlFileLock to prevent other backends to look\n\nLooks somewhat strange. The old boolean had a single task to allow\nbackends to write WAL but the current state has multiple states that\ncontrol recovery progress. So I think it needs a further change.\n\n===\n Now allow backends to write WAL and update the control file status in\n consequence. The recovery state is updated to allow backends to write\n WAL, while holding ControlFileLock to prevent other backends to look\n at an inconsistent state of the control file in shared memory.\n===\n\n> > > Also, the last part of the same comment is as:\n> > >\n> > > \" .. although the boolean flag to allow WAL is probably atomic in\n> > > itself, .....\",\n> > >\n> > > I am a bit confused here too about saying \"atomic\" to it, is that correct?\n> > > I haven't done anything about it, only replaced the \"boolean flag\" to \"recovery\n> > > state\" in the attached patch.\n> >\n> > I don't think the atomic is correct, it's no more boolean so it is\n> > better we get rid of this part of the comment\n> \n> Thanks for the confirmation. Updated that part in the attached version.\n\nI think the original comment still holds except the data type.\n\n-\t * Also, although the boolean flag to allow WAL is probably atomic in\n-\t * itself, we use the info_lck here to ensure that there are no race\n-\t * conditions concerning visibility of other recent updates to shared\n-\t * memory.\n+\t * Also, we use the info_lck to update the recovery state to ensure that\n+\t * there are no race conditions concerning visibility of other recent\n+\t * updates to shared memory.\n\nThe type RecoveryState is int, which is of the native machine size\nthat is considered to be atomic as well as boolean.
However, I don't\nobject to remove the phrase since that removal doesn't change the\npoint of the description.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 04 Feb 2021 09:43:49 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Correct comment in StartupXLOG()." }, { "msg_contents": "On Thu, Feb 4, 2021 at 6:18 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 3 Feb 2021 16:36:13 +0530, Amul Sul <sulamul@gmail.com> wrote in\n> > On Wed, Feb 3, 2021 at 2:48 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Wed, Feb 3, 2021 at 2:28 PM Amul Sul <sulamul@gmail.com> wrote:\n> > > >\n> > > > Hi,\n> > > >\n> > > > SharedRecoveryState member of XLogCtl is no longer a boolean flag, got changes\n> > > > in 4e87c4836ab9 to enum but, comment referring to it still referred as the\n> > > > boolean flag which is pretty confusing and incorrect.\n> > >\n> > > +1 for the comment change\n>\n> Actually the \"flag\" has been changed to an integer (emnum), so it\n> needs a change. However, the current proposal:\n>\n> * Now allow backends to write WAL and update the control file status in\n> - * consequence. The boolean flag allowing backends to write WAL is\n> + * consequence. The recovery state allowing backends to write WAL is\n> * updated while holding ControlFileLock to prevent other backends to look\n>\n> Looks somewhat strange. The old booean had a single task to allow\n> backends to write WAL but the current state has multple states that\n> controls recovery progress. So I thnink it needs a further change.\n>\n> ===\n> Now allow backends to write WAL and update the control file status in\n> consequence. 
The recovery state is updated to allow backends to write\n> WAL, while holding ControlFileLock to prevent other backends to look\n> at an inconsistent state of the control file in shared memory.\n> ===\n>\n\nThis looks more accurate, added the same in the attached version. Thanks for the\ncorrection.\n\nRegards,\nAmul", "msg_date": "Thu, 4 Feb 2021 09:39:22 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Correct comment in StartupXLOG()." }, { "msg_contents": "On Thu, Feb 4, 2021 at 9:39 AM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Thu, Feb 4, 2021 at 6:18 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Wed, 3 Feb 2021 16:36:13 +0530, Amul Sul <sulamul@gmail.com> wrote in\n> > > On Wed, Feb 3, 2021 at 2:48 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > On Wed, Feb 3, 2021 at 2:28 PM Amul Sul <sulamul@gmail.com> wrote:\n> > > > >\n> > > > > Hi,\n> > > > >\n> > > > > SharedRecoveryState member of XLogCtl is no longer a boolean flag, got changes\n> > > > > in 4e87c4836ab9 to enum but, comment referring to it still referred as the\n> > > > > boolean flag which is pretty confusing and incorrect.\n> > > >\n> > > > +1 for the comment change\n> >\n> > Actually the \"flag\" has been changed to an integer (emnum), so it\n> > needs a change. However, the current proposal:\n> >\n> > * Now allow backends to write WAL and update the control file status in\n> > - * consequence. The boolean flag allowing backends to write WAL is\n> > + * consequence. The recovery state allowing backends to write WAL is\n> > * updated while holding ControlFileLock to prevent other backends to look\n> >\n> > Looks somewhat strange. The old booean had a single task to allow\n> > backends to write WAL but the current state has multple states that\n> > controls recovery progress. 
So I thnink it needs a further change.\n> >\n> > ===\n> > Now allow backends to write WAL and update the control file status in\n> > consequence. The recovery state is updated to allow backends to write\n> > WAL, while holding ControlFileLock to prevent other backends to look\n> > at an inconsistent state of the control file in shared memory.\n> > ===\n> >\n>\n> This looks more accurate, added the same in the attached version. Thanks for the\n> correction.\n\nLooks good to me.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Feb 2021 12:58:29 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Correct comment in StartupXLOG()." }, { "msg_contents": "On Thu, Feb 04, 2021 at 12:58:29PM +0530, Dilip Kumar wrote:\n> Looks good to me.\n\nRather than using the term \"recovery state\", I would just use\nSharedRecoveryState. This leads me to the attached.\n--\nMichael", "msg_date": "Fri, 5 Feb 2021 15:23:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Correct comment in StartupXLOG()." }, { "msg_contents": "On Fri, Feb 5, 2021 at 11:53 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Feb 04, 2021 at 12:58:29PM +0530, Dilip Kumar wrote:\n> > Looks good to me.\n>\n> Rather than using the term \"recovery state\", I would just use\n> SharedRecoveryState. This leads me to the attached.\n\nAlright, that too looks good. Thank you !\n\nRegards,\nAmul\n\n\n", "msg_date": "Fri, 5 Feb 2021 14:42:57 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Correct comment in StartupXLOG()." }, { "msg_contents": "On Fri, Feb 05, 2021 at 02:42:57PM +0530, Amul Sul wrote:\n> Alright, that too looks good. Thank you !\n\nThanks, Amul. 
I have applied this one.\n--\nMichael", "msg_date": "Sat, 6 Feb 2021 10:36:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Correct comment in StartupXLOG()." } ]
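The locking pattern debated in the thread above — why info_lck is taken even though a plain int store is "probably atomic in itself" — can be sketched as follows. This is a minimal illustration with assumed names, not the actual xlog.c code; a C11 test-and-set spinlock stands in for PostgreSQL's info_lck. The point is that the lock makes the state change visible together with the other shared-memory updates made alongside it.

```c
#include <assert.h>
#include <stdatomic.h>

/* Sketch of the recovery-state enum that replaced the old boolean flag */
typedef enum
{
	RECOVERY_STATE_CRASH,
	RECOVERY_STATE_ARCHIVE,
	RECOVERY_STATE_DONE
} RecoveryState;

static atomic_flag info_lck = ATOMIC_FLAG_INIT;	/* stand-in spinlock */
static RecoveryState SharedRecoveryState = RECOVERY_STATE_CRASH;
static long lastReplayedEndRecPtr = 0;	/* example of a related shared field */

static void
spin_lock(void)
{
	while (atomic_flag_test_and_set(&info_lck))
		;						/* spin until the lock is free */
}

static void
spin_unlock(void)
{
	atomic_flag_clear(&info_lck);
}

static void
set_recovery_done(long end_ptr)
{
	spin_lock();
	lastReplayedEndRecPtr = end_ptr;	/* updated together with ... */
	SharedRecoveryState = RECOVERY_STATE_DONE;	/* ... the state itself */
	spin_unlock();
}

static RecoveryState
get_recovery_state(void)
{
	RecoveryState s;

	spin_lock();
	s = SharedRecoveryState;
	spin_unlock();
	return s;
}
```

Because both fields change under the same lock, a reader that observes RECOVERY_STATE_DONE also observes the related field's update — the visibility argument the original comment was making, independent of whether the int store itself is atomic.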
[ { "msg_contents": "Hi Hackers.\n\nAs discovered in another thread [master] there is an *existing* bug in\nthe PG HEAD code which can happen if a DROP TABLE is done at same time\na replication tablesync worker is running.\n\nIt seems the table's relid that the sync worker is using gets ripped\nout from underneath it and is invalidated by the DROP TABLE. Any\nsubsequent use of that relid will go wrong. In the particular test\ncase which found this, the result was a stack trace when a LOG message\ntried to display the table name of the bad relid.\n\nPSA the patch code to fix this. The patch disallows DROP TABLE while\nany associated tablesync worker is still running. This fix was already\nconfirmed OK in the other thread [v25]\n\n----\n[master] https://www.postgresql.org/message-id/CAHut%2BPtSO4WsZwx8z%3D%2BYp_OWpxFmmFi5WX6OmYJzULNa2NV89g%40mail.gmail.com\n[v25] https://www.postgresql.org/message-id/CAHut%2BPtAKP1FoHbUEWN%2Ba%3D8Pg_njsJKc9Zoz05A_ewJSvjX2MQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 3 Feb 2021 20:23:20 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "DROP TABLE can crash the replication sync worker" }, { "msg_contents": "On Wed, Feb 3, 2021 at 2:53 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi Hackers.\n>\n> As discovered in another thread [master] there is an *existing* bug in\n> the PG HEAD code which can happen if a DROP TABLE is done at same time\n> a replication tablesync worker is running.\n>\n> It seems the table's relid that the sync worker is using gets ripped\n> out from underneath it and is invalidated by the DROP TABLE. Any\n> subsequent use of that relid will go wrong.\n>\n\nWhere exactly did you pause the tablesync worker while dropping the\ntable? 
We acquire the lock on the table in LogicalRepSyncTableStart\nand then keep it for the entire duration of tablesync worker so drop\ntable shouldn't be allowed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 3 Feb 2021 18:19:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: DROP TABLE can crash the replication sync worker" }, { "msg_contents": "On Wed, Feb 3, 2021 at 11:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Feb 3, 2021 at 2:53 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Hi Hackers.\n> >\n> > As discovered in another thread [master] there is an *existing* bug in\n> > the PG HEAD code which can happen if a DROP TABLE is done at same time\n> > a replication tablesync worker is running.\n> >\n> > It seems the table's relid that the sync worker is using gets ripped\n> > out from underneath it and is invalidated by the DROP TABLE. Any\n> > subsequent use of that relid will go wrong.\n> >\n>\n> Where exactly did you pause the tablesync worker while dropping the\n> table? We acquire the lock on the table in LogicalRepSyncTableStart\n> and then keep it for the entire duration of tablesync worker so drop\n> table shouldn't be allowed.\n>\n\nI have a breakpoint set on LogicalRepSyncTableStart. 
The DROP TABLE is\ndone while paused on that breakpoint, so no code of\nLogicalRepSyncTableStart has even executed yet.\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 4 Feb 2021 11:01:17 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: DROP TABLE can crash the replication sync worker" }, { "msg_contents": "On Thu, Feb 4, 2021 at 5:31 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, Feb 3, 2021 at 11:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Feb 3, 2021 at 2:53 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > Hi Hackers.\n> > >\n> > > As discovered in another thread [master] there is an *existing* bug in\n> > > the PG HEAD code which can happen if a DROP TABLE is done at same time\n> > > a replication tablesync worker is running.\n> > >\n> > > It seems the table's relid that the sync worker is using gets ripped\n> > > out from underneath it and is invalidated by the DROP TABLE. Any\n> > > subsequent use of that relid will go wrong.\n> > >\n> >\n> > Where exactly did you pause the tablesync worker while dropping the\n> > table? We acquire the lock on the table in LogicalRepSyncTableStart\n> > and then keep it for the entire duration of tablesync worker so drop\n> > table shouldn't be allowed.\n> >\n>\n> I have a breakpoint set on LogicalRepSyncTableStart. The DROP TABLE is\n> done while paused on that breakpoint, so no code of\n> LogicalRepSyncTableStart has even executed yet.\n>\n\nFair enough. So, you are hitting this problem in finish_sync_worker()\nwhile logging the message because by that time the relation is\ndropped. 
I think it is good to fix that but we don't want the patch\nyou have attached here, we can fix it locally in finish_sync_worker()\nby constructing a different message (something like: \"logical\nreplication table synchronization worker for subscription \\\"%s\\\" has\nfinished\") when we can't get rel_name from rel id. This doesn't appear\nto be as serious a problem as we were talking about in the patch\n\"Allow multiple xacts during table sync in logical replication.\" [1]\nbecause there we don't hold the lock on the table for the entire\nduration tablesync. So, even if we want to fix this problem, it would\nbe more appropriate for back-branches if we push the patch [1].\n\n[1] - https://www.postgresql.org/message-id/CAHut%2BPtAKP1FoHbUEWN%2Ba%3D8Pg_njsJKc9Zoz05A_ewJSvjX2MQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 4 Feb 2021 17:03:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: DROP TABLE can crash the replication sync worker" } ]
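The local fix Amit suggests above — logging a different message when the relation name can no longer be resolved — can be sketched like this. The helper name is hypothetical (this is not the actual finish_sync_worker() code); the message wording follows the suggestion in the mail.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Sketch only: if the synced table was dropped concurrently, a
 * get_rel_name()-style lookup yields NULL, so build the completion
 * message without the table name instead of using a stale relid.
 */
static void
format_sync_done_message(char *buf, size_t len,
						 const char *subname, const char *relname)
{
	if (relname != NULL)
		snprintf(buf, len,
				 "logical replication table synchronization worker for "
				 "subscription \"%s\", table \"%s\" has finished",
				 subname, relname);
	else
		snprintf(buf, len,
				 "logical replication table synchronization worker for "
				 "subscription \"%s\" has finished",
				 subname);
}
```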
[ { "msg_contents": "Hi,\n\nWhile playing with COPY FROM refactorings in another thread, I noticed \ncorner case where I think backslash escaping doesn't work correctly. \nConsider the following input:\n\n\\么.foo\n\nI hope that came through in this email correctly as UTF-8. The string \ncontains a sequence of: backslash, multibyte-character and a dot.\n\nThe documentation says:\n\n> Backslash characters (\\) can be used in the COPY data to quote data\n> characters that might otherwise be taken as row or column delimiters\n\nSo I believe escaping multi-byte characters is supposed to work, and it \nusually does.\n\nHowever, let's consider the same string in Big5 encoding (in hex escaped \nformat):\n\n\\x5ca45c2e666f6f\n\nThe first byte 0x5c, is the backslash. The multi-byte character consists \nof two bytes: 0xa4 0x5c. Note that the second byte is equal to a backslash.\n\nThat confuses the parser in CopyReadLineText, so that you get an error:\n\npostgres=# create table copytest (t text);\nCREATE TABLE\npostgres=# \\copy copytest from 'big5-skip-test.data' with (encoding 'big5');\nERROR: end-of-copy marker corrupt\nCONTEXT: COPY copytest, line 1\n\nWhat happens is that when the parser sees the backslash, it looks ahead \nat the next byte, and when it's not a dot, it skips over it:\n\n> \t\t\telse if (!cstate->opts.csv_mode)\n> \n> \t\t\t\t/*\n> \t\t\t\t * If we are here, it means we found a backslash followed by\n> \t\t\t\t * something other than a period. In non-CSV mode, anything\n> \t\t\t\t * after a backslash is special, so we skip over that second\n> \t\t\t\t * character too. If we didn't do that \\\\. would be\n> \t\t\t\t * considered an eof-of copy, while in non-CSV mode it is a\n> \t\t\t\t * literal backslash followed by a period. 
In CSV mode,\n> \t\t\t\t * backslashes are not special, so we want to process the\n> \t\t\t\t * character after the backslash just like a normal character,\n> \t\t\t\t * so we don't increment in those cases.\n> \t\t\t\t */\n> \t\t\t\traw_buf_ptr++;\n\nHowever, in a multi-byte encoding that might \"embed\" ascii characters, \nit should skip over the next *character*, not byte.\n\nAttached is a pretty straightforward patch to fix that. Anyone see a \nproblem with this?\n\n- Heikki", "msg_date": "Wed, 3 Feb 2021 14:08:37 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Bug in COPY FROM backslash escaping multi-byte chars" }, { "msg_contents": "On Wed, Feb 3, 2021 at 8:08 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> Hi,\n>\n> While playing with COPY FROM refactorings in another thread, I noticed\n> corner case where I think backslash escaping doesn't work correctly.\n> Consider the following input:\n>\n> \\么.foo\n\nI've seen multibyte delimiters in the wild, so it's not as outlandish as it\nseems. The fix is simple enough, so +1.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Feb 3, 2021 at 8:08 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:>> Hi,>> While playing with COPY FROM refactorings in another thread, I noticed> corner case where I think backslash escaping doesn't work correctly.> Consider the following input:>> \\么.fooI've seen multibyte delimiters in the wild, so it's not as outlandish as it seems. 
The fix is simple enough, so +1.--John NaylorEDB: http://www.enterprisedb.com", "msg_date": "Wed, 3 Feb 2021 09:38:11 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Bug in COPY FROM backslash escaping multi-byte chars" }, { "msg_contents": "On 03/02/2021 15:38, John Naylor wrote:\n> On Wed, Feb 3, 2021 at 8:08 AM Heikki Linnakangas <hlinnaka@iki.fi \n> <mailto:hlinnaka@iki.fi>> wrote:\n> >\n> > Hi,\n> >\n> > While playing with COPY FROM refactorings in another thread, I noticed\n> > corner case where I think backslash escaping doesn't work correctly.\n> > Consider the following input:\n> >\n> > \\么.foo\n> \n> I've seen multibyte delimiters in the wild, so it's not as outlandish as \n> it seems.\n\nWe don't actually support multi-byte characters as delimiters or quote \nor escape characters:\n\npostgres=# copy copytest from 'foo' with (delimiter '么');\nERROR: COPY delimiter must be a single one-byte character\n\n> The fix is simple enough, so +1.\n\nThanks, I'll commit and backpatch shortly.\n\n- Heikki\n\n\n", "msg_date": "Wed, 3 Feb 2021 15:46:30 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Bug in COPY FROM backslash escaping multi-byte chars" }, { "msg_contents": "At Wed, 3 Feb 2021 15:46:30 +0200, Heikki Linnakangas <hlinnaka@iki.fi> wrote in \r\n> On 03/02/2021 15:38, John Naylor wrote:\r\n> > On Wed, Feb 3, 2021 at 8:08 AM Heikki Linnakangas <hlinnaka@iki.fi\r\n> > <mailto:hlinnaka@iki.fi>> wrote:\r\n> > >\r\n> > > Hi,\r\n> > >\r\n> > > While playing with COPY FROM refactorings in another thread, I noticed\r\n> > > corner case where I think backslash escaping doesn't work correctly.\r\n> > > Consider the following input:\r\n> > >\r\n> > > \\么.foo\r\n> > I've seen multibyte delimiters in the wild, so it's not as outlandish\r\n> > as it seems.\r\n> \r\n> We don't actually support multi-byte characters as delimiters or quote\r\n> or escape 
characters:\r\n> \r\n> postgres=# copy copytest from 'foo' with (delimiter '么');\r\n> ERROR: COPY delimiter must be a single one-byte character\r\n> \r\n> > The fix is simple enough, so +1.\r\n> \r\n> Thanks, I'll commit and backpatch shortly.\r\n\r\nI'm not sure the assumption in the second hunk always holds, but\r\nthat's fine at least with Shift-JIS and -2004 since they are two-byte\r\nencoding.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Thu, 04 Feb 2021 10:50:45 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bug in COPY FROM backslash escaping multi-byte chars" }, { "msg_contents": "On 04/02/2021 03:50, Kyotaro Horiguchi wrote:\n> At Wed, 3 Feb 2021 15:46:30 +0200, Heikki Linnakangas <hlinnaka@iki.fi> wrote in\n>> On 03/02/2021 15:38, John Naylor wrote:\n>>> On Wed, Feb 3, 2021 at 8:08 AM Heikki Linnakangas <hlinnaka@iki.fi\n>>> <mailto:hlinnaka@iki.fi>> wrote:\n>>> >\n>>> > Hi,\n>>> >\n>>> > While playing with COPY FROM refactorings in another thread, I noticed\n>>> > corner case where I think backslash escaping doesn't work correctly.\n>>> > Consider the following input:\n>>> >\n>>> > \\么.foo\n>>> I've seen multibyte delimiters in the wild, so it's not as outlandish\n>>> as it seems.\n>>\n>> We don't actually support multi-byte characters as delimiters or quote\n>> or escape characters:\n>>\n>> postgres=# copy copytest from 'foo' with (delimiter '么');\n>> ERROR: COPY delimiter must be a single one-byte character\n>>\n>>> The fix is simple enough, so +1.\n>>\n>> Thanks, I'll commit and backpatch shortly.\n> \n> I'm not sure the assumption in the second hunk always holds, but\n> that's fine at least with Shift-JIS and -2004 since they are two-byte\n> encoding.\n\nThe assumption is that a multi-byte character cannot have a special \nmeaning, as far as the loop in CopyReadLineText is concerned. 
The \ncharacters with special meaning are '\\\\', '\\n' and '\\r'. That hold \nregardless of encoding.\n\nThinking about this a bit more, I think the attached patch is slightly \nbetter. Normally in the loop, raw_buf_ptr points to the next byte to \nconsume, and 'c' is the last consumed byte. At the end of the loop, we \ncheck 'c' to see if it was a multi-byte character, and skip its 2nd, 3rd \nand 4th byte if necessary. The crux of the bug is that after the \n\"raw_buf_ptr++;\" to skip the character after the backslash, we left c to \n'\\\\', even though we already consumed the first byte of the next \ncharacter. Because of that, the end-of-the-loop check didn't correctly \ntreat it as a multi-byte character. So a more straightforward fix is to \nset 'c' to the byte we skipped over.\n\n- Heikki", "msg_date": "Thu, 4 Feb 2021 21:37:54 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Bug in COPY FROM backslash escaping multi-byte chars" } ]
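The bug and fix discussed above can be illustrated with a simplified scanner (assumed helper names; not the actual CopyReadLineText() code). In Big5, the character 0xa4 0x5c has a trailing byte equal to the backslash, so after consuming a backslash the scanner must skip the whole following character, not the following byte. big5_mblen() here mirrors the usual rule that a byte with the high bit set starts a two-byte Big5 character.

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified Big5 character-length rule: high-bit byte starts a 2-byte char */
static int
big5_mblen(unsigned char c)
{
	return (c & 0x80) ? 2 : 1;
}

/* Return true if the buffer contains a "\." end-of-copy marker */
static bool
has_eoc_marker(const unsigned char *buf, int len)
{
	int			i = 0;

	while (i < len)
	{
		unsigned char c = buf[i++];

		if (c == '\\' && i < len)
		{
			if (buf[i] == '.')
				return true;	/* backslash-period: end-of-copy */
			/* the fix: skip the whole escaped character, not one byte */
			i += big5_mblen(buf[i]);
		}
		else if (c & 0x80)
			i += big5_mblen(c) - 1;	/* skip trailing byte(s) of this char */
	}
	return false;
}
```

With the old byte-wise skip (i += 1 unconditionally), the trailing 0x5c of the Big5 character would be re-read as a fresh backslash and the following '.' misinterpreted as end-of-copy — exactly the "end-of-copy marker corrupt" error reported above.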
[ { "msg_contents": "Something that came out of work on pg_dump recently. I added const \ndecorations to the *info arguments of the dump* functions, to clarify \nthat they don't modify that argument. Many other nearby functions \nmodify their arguments, so this can help clarify these different APIs a bit.", "msg_date": "Wed, 3 Feb 2021 13:10:51 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "pg_dump: Add const decorations" } ]
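The kind of const decoration described in the thread above can be shown with a toy example (hypothetical type and function names, not the actual pg_dump code): qualifying the *info parameter as const lets the compiler document and enforce that a dump function only reads the object description, in contrast to nearby functions that legitimately modify their arguments.

```c
#include <assert.h>
#include <string.h>

typedef struct
{
	char		name[64];
	int			oid;
} TableInfoSketch;

/* reads tbinfo but cannot modify it; the compiler enforces the contract */
static int
dump_table_oid(const TableInfoSketch *tbinfo)
{
	return tbinfo->oid;
}

/* a nearby function that does mutate its argument, hence no const */
static void
rename_table(TableInfoSketch *tbinfo, const char *newname)
{
	strncpy(tbinfo->name, newname, sizeof(tbinfo->name) - 1);
	tbinfo->name[sizeof(tbinfo->name) - 1] = '\0';
}
```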
[ { "msg_contents": "Hi,\n\nHas anybody seen this:\n\n$ psql regression\nTiming is on.\npsql (14devel)\nType \"help\" for help.\n\nregression=# set force_parallel_mode to on;\nSET\nTime: 0.888 ms\nregression=# set jit to on;\nSET\nTime: 0.487 ms\nregression=# set jit_above_cost to 1;\nSET\nTime: 0.476 ms\nregression=# SELECT p1.f1, p2.f1, p1.f1 * p2.f1 FROM POINT_TBL p1,\nPOINT_TBL p2 WHERE p1.f1[0] < 1;\nWARNING: terminating connection because of crash of another server process\nDETAIL: The postmaster has commanded this server process to roll back\nthe current transaction and exit, because another server process\nexited abnormally and possibly corrupted shared memory.\nHINT: In a moment you should be able to reconnect to the database and\nrepeat your command.\nERROR: value out of range: underflow\nCONTEXT: parallel worker\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. 
Attempting reset: Failed.\nTime: 670.801 ms\n!?> \\q\n\n(gdb) bt\n#0 0x00007f57ac6508cd in std::_Function_handler<void (unsigned long,\nllvm::object::ObjectFile const&),\nllvm::OrcCBindingsStack::OrcCBindingsStack(llvm::TargetMachine&,\nstd::function<std::unique_ptr<llvm::orc::IndirectStubsManager,\nstd::default_delete<llvm::orc::IndirectStubsManager> >\n()>)::{lambda(unsigned long, llvm::object::ObjectFile\nconst&)#3}>::_M_invoke(std::_Any_data const&, unsigned long,\nllvm::object::ObjectFile const&) () from\n/opt/rh/llvm-toolset-7.0/root/usr/lib64/libLLVM-7.so\n#1 0x00007f57ac652578 in\nllvm::orc::RTDyldObjectLinkingLayer::ConcreteLinkedObject<std::shared_ptr<llvm::RuntimeDyld::MemoryManager>\n>::~ConcreteLinkedObject() () from\n/opt/rh/llvm-toolset-7.0/root/usr/lib64/libLLVM-7.so\n#2 0x00007f57ac6527aa in std::_Rb_tree<unsigned long,\nstd::pair<unsigned long const,\nstd::unique_ptr<llvm::orc::RTDyldObjectLinkingLayerBase::LinkedObject,\nstd::default_delete<llvm::orc::RTDyldObjectLinkingLayerBase::LinkedObject>\n> >, std::_Select1st<std::pair<unsigned long const,\nstd::unique_ptr<llvm::orc::RTDyldObjectLinkingLayerBase::LinkedObject,\nstd::default_delete<llvm::orc::RTDyldObjectLinkingLayerBase::LinkedObject>\n> > >, std::less<unsigned long>, std::allocator<std::pair<unsigned\nlong const, std::unique_ptr<llvm::orc::RTDyldObjectLinkingLayerBase::LinkedObject,\nstd::default_delete<llvm::orc::RTDyldObjectLinkingLayerBase::LinkedObject>\n> > > >::_M_erase(std::_Rb_tree_node<std::pair<unsigned long const,\nstd::unique_ptr<llvm::orc::RTDyldObjectLinkingLayerBase::LinkedObject,\nstd::default_delete<llvm::orc::RTDyldObjectLinkingLayerBase::LinkedObject>\n> > >*) () from /opt/rh/llvm-toolset-7.0/root/usr/lib64/libLLVM-7.so\n#3 0x00007f57ac65ec91 in\nllvm::OrcCBindingsStack::~OrcCBindingsStack() () from\n/opt/rh/llvm-toolset-7.0/root/usr/lib64/libLLVM-7.so\n#4 0x00007f57ac65efaa in LLVMOrcDisposeInstance () from\n/opt/rh/llvm-toolset-7.0/root/usr/lib64/libLLVM-7.so\n#5 
0x00007f57ae62d7bf in llvm_shutdown (code=1, arg=0) at llvmjit.c:926\n#6 0x0000000000916d00 in proc_exit_prepare (code=1) at ipc.c:209\n#7 0x0000000000916bdb in proc_exit (code=1) at ipc.c:107\n#8 0x000000000087e8d6 in StartBackgroundWorker () at bgworker.c:832\n#9 0x0000000000892fa7 in do_start_bgworker (rw=0x2dae8a0) at postmaster.c:5833\n#10 0x0000000000893355 in maybe_start_bgworkers () at postmaster.c:6058\n#11 0x0000000000892390 in sigusr1_handler (postgres_signal_arg=10) at\npostmaster.c:5215\n#12 <signal handler called>\n#13 0x00007f57d4e20933 in __select_nocancel () from /lib64/libc.so.6\n#14 0x000000000088e00e in ServerLoop () at postmaster.c:1694\n#15 0x000000000088d9fd in PostmasterMain (argc=5, argv=0x2d863f0) at\npostmaster.c:1402\n#16 0x0000000000791197 in main (argc=5, argv=0x2d863f0) at main.c:209\n(gdb)\n\nI can make this happen with PG > v12. Maybe there's something wrong\nwith my LLVM installation but just thought to ask here just in case.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 3 Feb 2021 23:12:14 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "a curious case of force_parallel_mode = on with jit'ing" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> Has anybody seen this:\n\nWorks for me on HEAD (using RHEL8.3, gcc 8.3.1, LLVM 10.0.1).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Feb 2021 11:08:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: a curious case of force_parallel_mode = on with jit'ing" }, { "msg_contents": "On Thu, Feb 4, 2021 at 1:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > Has anybody seen this:\n>\n> Works for me on HEAD (using RHEL8.3, gcc 8.3.1, LLVM 10.0.1).\n\nThanks for checking. 
Must be my LLVM setup I guess:\n\n$ llvm-config --version\n7.0.1\n$ cat /etc/redhat-release\nCentOS Linux release 7.7.1908 (Core)\n$ gcc --version\ngcc (GCC) 9.1.1 20190605 (Red Hat 9.1.1-2)\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Feb 2021 13:10:04 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: a curious case of force_parallel_mode = on with jit'ing" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Thu, Feb 4, 2021 at 1:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Works for me on HEAD (using RHEL8.3, gcc 8.3.1, LLVM 10.0.1).\n\n> Thanks for checking. Must be my LLVM setup I guess:\n\n> $ llvm-config --version\n> 7.0.1\n> $ cat /etc/redhat-release\n> CentOS Linux release 7.7.1908 (Core)\n> $ gcc --version\n> gcc (GCC) 9.1.1 20190605 (Red Hat 9.1.1-2)\n\nHmmm ... seems like an odd combination to have a newer gcc and an\nolder LLVM than what RHEL8 is shipping. Is this really the current\nrecommendation on CentOS 7?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Feb 2021 23:41:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: a curious case of force_parallel_mode = on with jit'ing" }, { "msg_contents": "On Thu, Feb 4, 2021 at 1:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Thu, Feb 4, 2021 at 1:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Works for me on HEAD (using RHEL8.3, gcc 8.3.1, LLVM 10.0.1).\n>\n> > Thanks for checking. Must be my LLVM setup I guess:\n>\n> > $ llvm-config --version\n> > 7.0.1\n> > $ cat /etc/redhat-release\n> > CentOS Linux release 7.7.1908 (Core)\n> > $ gcc --version\n> > gcc (GCC) 9.1.1 20190605 (Red Hat 9.1.1-2)\n>\n> Hmmm ... seems like an odd combination to have a newer gcc and an\n> older LLVM than what RHEL8 is shipping. 
Is this really the current\n> recommendation on CentOS 7?\n\nNot an official combination. At some point last year I decided to\ninstall a more modern gcc than what CentOS 7 officially provides and\nended up getting them through a Software Collections (scl) package\ncalled devtoolset-9.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Feb 2021 14:45:34 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: a curious case of force_parallel_mode = on with jit'ing" } ]
[ { "msg_contents": "Hi,\n\nThe server still supports the old protocol version 2. Protocol version 3 \nwas introduced in PostgreSQL 7.4, so there shouldn't be many clients \naround anymore that don't support it.\n\nCOPY FROM STDIN is particularly problematic with the old protocol, \nbecause the end-of-copy can only be detected by the \\. marker. So the \nserver has to read the input one byte at a time, and check for \\. as it \ngoes. At [1], I'm working on a patch to change the way the encoding \nconversion is performed in COPY FROM, so that we convert the data in \nlarger chunks, before scanning the input for line boundaries. We can't \ndo that safely in the old protocol.\n\nI propose that we remove server support for COPY FROM STDIN with \nprotocol version 2, per attached patch. Even if we could still support \nit, it would be a very rarely used and tested codepath, prone to bugs. \nPerhaps we could remove support for the old protocol altogether, but I'm \nnot proposing that we go that far just yet.\n\n[1] \nhttps://www.postgresql.org/message-id/e7861509-3960-538a-9025-b75a61188e01%40iki.fi\n\n- Heikki", "msg_date": "Wed, 3 Feb 2021 17:43:47 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Removing support for COPY FROM STDIN in protocol version 2" }, { "msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> I propose that we remove server support for COPY FROM STDIN with \n> protocol version 2, per attached patch. Even if we could still support \n> it, it would be a very rarely used and tested codepath, prone to bugs. 
\n> Perhaps we could remove support for the old protocol altogether, but I'm \n> not proposing that we go that far just yet.\n\nI'm not really on board with half-baked removal of protocol 2.\nIf we're going to kill it we should just kill it altogether.\n(The argument that it's untested surely applies to the rest\nof the P2 code as well.)\n\nI have a vague recollection that JDBC users still like to use\nprotocol 2 for some reason --- is that out of date?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Feb 2021 11:00:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Removing support for COPY FROM STDIN in protocol version 2" }, { "msg_contents": "On 2021-Feb-03, Tom Lane wrote:\n\n> I have a vague recollection that JDBC users still like to use\n> protocol 2 for some reason --- is that out of date?\n\n2016:\n\ncommit c3d8571e53cc5b702dae2f832b02c872ad44c3b7\nAuthor: Vladimir Sitnikov <sitnikov.vladimir@gmail.com>\nAuthorDate: Sat Aug 6 12:22:17 2016 +0300\nCommitDate: Sat Aug 13 11:27:16 2016 +0300\n\n fix: support cases when user-provided queries have 'returning'\n \n This change includes: drop v2 protocol support, and query parsing refactoring.\n Currently query parse cache is still per-connection, however \"returningColumNames\"\n are part of cache key, thus the parse cache can be made global.\n \n This fixes #488 (see org.postgresql.test.jdbc3.GeneratedKeysTest)\n\nThis commit does remove all files in\npgjdbc/src/main/java/org/postgresql/core/v2/, leaving only \"v3/\".\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"Those who use electric razors are infidels destined to burn in hell while\nwe drink from rivers of beer, download free vids and mingle with naked\nwell shaved babes.\" (http://slashdot.org/comments.pl?sid=44793&cid=4647152)\n\n\n", "msg_date": "Wed, 3 Feb 2021 13:09:58 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Removing support for COPY FROM 
STDIN in protocol version 2" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Feb-03, Tom Lane wrote:\n>> I have a vague recollection that JDBC users still like to use\n>> protocol 2 for some reason --- is that out of date?\n\n> [ yes, since 2016 ]\n\nThen let's kill it dead, server and libpq both.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Feb 2021 11:29:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Removing support for COPY FROM STDIN in protocol version 2" }, { "msg_contents": "On 03/02/2021 18:29, Tom Lane wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> On 2021-Feb-03, Tom Lane wrote:\n>>> I have a vague recollection that JDBC users still like to use\n>>> protocol 2 for some reason --- is that out of date?\n> \n>> [ yes, since 2016 ]\n> \n> Then let's kill it dead, server and libpq both.\n\nOk, works for me. I'll prepare a larger patch to do that.\n\nSince we're on a removal-spree, it'd also be nice to get rid of the \n\"fast-path\" function call interface, PQfn(). However, libpq is using it \ninternally in the lo_*() functions, so if we remove it from the server, \nlo_*() will stop working with old libpq versions. It would be good to \nchange those functions now to use PQexecParams() instead, so that we \ncould remove the fast-path server support in the future.\n\n- Heikki\n\n\n", "msg_date": "Wed, 3 Feb 2021 18:47:05 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Removing support for COPY FROM STDIN in protocol version 2" }, { "msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> Since we're on a removal-spree, it'd also be nice to get rid of the \n> \"fast-path\" function call interface, PQfn(). However, libpq is using it \n> internally in the lo_*() functions, so if we remove it from the server, \n> lo_*() will stop working with old libpq versions. 
It would be good to \n> change those functions now to use PQexecParams() instead, so that we \n> could remove the fast-path server support in the future.\n\nI'm disinclined to touch that. It is considered part of protocol v3,\nand there is no very good reason to suppose that nothing but libpq\nis using it. Besides, what would it really save? fastpath.c has\nnot been a source of maintenance problems.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Feb 2021 12:53:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Removing support for COPY FROM STDIN in protocol version 2" }, { "msg_contents": "On Wed, Feb 03, 2021 at 11:29:37AM -0500, Tom Lane wrote:\n> Then let's kill it dead, server and libpq both.\n\nYeah.\n--\nMichael", "msg_date": "Thu, 4 Feb 2021 15:54:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Removing support for COPY FROM STDIN in protocol version 2" }, { "msg_contents": "On 04/02/2021 08:54, Michael Paquier wrote:\n> On Wed, Feb 03, 2021 at 11:29:37AM -0500, Tom Lane wrote:\n>> Then let's kill it dead, server and libpq both.\n> \n> Yeah.\n\nOk, here we go.\n\nOne interesting thing I noticed while doing this:\n\nUp until now, we always used the old protocol for errors that happened \nearly in backend startup, before we processed the client's protocol \nversion and set the FrontendProtocol variable. I'm sure that made sense \nwhen V3 was introduced, but it was a surprise to me, and I didn't find \nthat documented anywhere. I changed it so that we use V3 errors, if \nFrontendProtocol is not yet set.\n\nHowever, I kept rudimentary support for sending errors in protocol \nversion 2. This way, if a client tries to connect with an old client, we \nstill send the \"unsupported frontend protocol\" error in the old format. 
\nLikewise, I kept the code in libpq to understand v2 ErrorResponse \nmessages during authentication.\n\n- Heikki", "msg_date": "Thu, 4 Feb 2021 13:00:05 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Removing support for COPY FROM STDIN in protocol version 2" }, { "msg_contents": "On 2021-Feb-04, Heikki Linnakangas wrote:\n\n> On 04/02/2021 08:54, Michael Paquier wrote:\n> > On Wed, Feb 03, 2021 at 11:29:37AM -0500, Tom Lane wrote:\n> > > Then let's kill it dead, server and libpq both.\n> > \n> > Yeah.\n> \n> Ok, here we go.\n\nAre you going to bump the .so version for this? I think that should be\ndone, since some functions disappear and there are struct changes. It\nis curious, though, to see that exports.txt needs no changes.\n\n(I'm not sure what's our protocol for so-version changes. Do we wait\ntill end of cycle, or do we put it together with the commit that\nmodifies the library? src/tools/RELEASE_CHANGES doesn't say)\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Thu, 4 Feb 2021 12:05:42 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Removing support for COPY FROM STDIN in protocol version 2" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Feb-04, Heikki Linnakangas wrote:\n>> Ok, here we go.\n\n> Are you going to bump the .so version for this? I think that should be\n> done, since some functions disappear and there are struct changes. It\n> is curious, though, to see that exports.txt needs no changes.\n\nUh, what? There should be no externally visible ABI changes in libpq\n(he says without having read the patch). 
If there's a need for a library\nmajor version bump, that'd be sufficient reason not to do this IMO.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Feb 2021 10:21:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Removing support for COPY FROM STDIN in protocol version 2" }, { "msg_contents": "On 2021-Feb-04, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2021-Feb-04, Heikki Linnakangas wrote:\n> >> Ok, here we go.\n> \n> > Are you going to bump the .so version for this? I think that should be\n> > done, since some functions disappear and there are struct changes. It\n> > is curious, though, to see that exports.txt needs no changes.\n> \n> Uh, what? There should be no externally visible ABI changes in libpq\n> (he says without having read the patch). If there's a need for a library\n> major version bump, that'd be sufficient reason not to do this IMO.\n\nYeah, the changes I was thinking about are all in libpq-int.h so that's\nnot really a problem. But one enum in libpq-fe.h renumbers values, and\nI think it's better to keep the old value labelled as \"unused\" to avoid\nany changes.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Thu, 4 Feb 2021 12:25:48 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Removing support for COPY FROM STDIN in protocol version 2" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Yeah, the changes I was thinking about are all in libpq-int.h so that's\n> not really a problem. But one enum in libpq-fe.h renumbers values, and\n> I think it's better to keep the old value labelled as \"unused\" to avoid\n> any changes.\n\nOh, yeah, can't do that. 
libpq-fe.h probably shouldn't change at all;\nbut certainly we can't renumber existing enum values there.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Feb 2021 10:35:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Removing support for COPY FROM STDIN in protocol version 2" }, { "msg_contents": "On 04/02/2021 17:35, Tom Lane wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> Yeah, the changes I was thinking about are all in libpq-int.h so that's\n>> not really a problem. But one enum in libpq-fe.h renumbers values, and\n>> I think it's better to keep the old value labelled as \"unused\" to avoid\n>> any changes.\n> \n> Oh, yeah, can't do that. libpq-fe.h probably shouldn't change at all;\n> but certainly we can't renumber existing enum values there.\n\nAh, right, there's even a comment above the enum that says that's a no \nno. But yeah, fixing that, I see no need for .so version bump.\n\n- Heikki\n\n\n", "msg_date": "Thu, 4 Feb 2021 17:47:05 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Removing support for COPY FROM STDIN in protocol version 2" }, { "msg_contents": "On Thu, Feb 4, 2021 at 11:47 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 04/02/2021 17:35, Tom Lane wrote:\n> > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> >> Yeah, the changes I was thinking about are all in libpq-int.h so that's\n> >> not really a problem. But one enum in libpq-fe.h renumbers values, and\n> >> I think it's better to keep the old value labelled as \"unused\" to avoid\n> >> any changes.\n> >\n> > Oh, yeah, can't do that. libpq-fe.h probably shouldn't change at all;\n> > but certainly we can't renumber existing enum values there.\n>\n> Ah, right, there's even a comment above the enum that says that's a no\n> no. 
But yeah, fixing that, I see no need for .so version bump.\n\nI was able to build libpq and psql on 7.3 with the tooling found on RHEL 7\n(the rest of the tree refused to build, but that's not relevant here) and\ngot the expected message when trying to connect:\n\nmaster:\nWelcome to psql 7.3.21, the PostgreSQL interactive terminal.\n\npatch:\npsql: FATAL: unsupported frontend protocol 2.0: server supports 3.0 to 3.0\n\nI couldn't find any traces of version 2 in the tree with the patch applied.\nThe enum mentioned above seems the only issue that needs to be fixed before\ncommit.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Feb 4, 2021 at 11:47 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:>> On 04/02/2021 17:35, Tom Lane wrote:> > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:> >> Yeah, the changes I was thinking about are all in libpq-int.h so that's> >> not really a problem.  But one enum in libpq-fe.h renumbers values, and> >> I think it's better to keep the old value labelled as \"unused\" to avoid> >> any changes.> >> > Oh, yeah, can't do that.  libpq-fe.h probably shouldn't change at all;> > but certainly we can't renumber existing enum values there.>> Ah, right, there's even a comment above the enum that says that's a no> no. But yeah, fixing that, I see no need for .so version bump.I was able to build libpq and psql on 7.3 with the tooling found on RHEL 7 (the rest of the tree refused to build, but that's not relevant here) and got the expected message when trying to connect:master:Welcome to psql 7.3.21, the PostgreSQL interactive terminal.patch:psql: FATAL:  unsupported frontend protocol 2.0: server supports 3.0 to 3.0I couldn't find any traces of version 2 in the tree with the patch applied. The enum mentioned above seems the only issue that needs to be fixed before commit. 
--John NaylorEDB: http://www.enterprisedb.com", "msg_date": "Thu, 25 Feb 2021 12:33:23 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Removing support for COPY FROM STDIN in protocol version 2" }, { "msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> On 04/02/2021 08:54, Michael Paquier wrote:\n>> On Wed, Feb 03, 2021 at 11:29:37AM -0500, Tom Lane wrote:\n>>> Then let's kill it dead, server and libpq both.\n\n> Ok, here we go.\n\n> One interesting thing I noticed while doing this:\n\n> Up until now, we always used the old protocol for errors that happened \n> early in backend startup, before we processed the client's protocol \n> version and set the FrontendProtocol variable. I'm sure that made sense \n> when V3 was introduced, but it was a surprise to me, and I didn't find \n> that documented anywhere. I changed it so that we use V3 errors, if \n> FrontendProtocol is not yet set.\n\n> However, I kept rudimentary support for sending errors in protocol \n> version 2. This way, if a client tries to connect with an old client, we \n> still send the \"unsupported frontend protocol\" error in the old format. \n> Likewise, I kept the code in libpq to understand v2 ErrorResponse \n> messages during authentication.\n\nYeah, we clearly need to send the \"unsupported frontend protocol\" error\nin as old a protocol as we can. Another point here is that if the\npostmaster fails to fork() a child process, it has a hack to spit out\nan error message without using backend/libpq at all, and that sends\nin 2.0 protocol. IIRC that's partly because it's simpler, as well\nas backward-friendly. So we should keep these vestiges.\n\nI rebased the 0001 patch (it'd bit-rotted slightly), read it over,\nand did some light testing. 
I found a couple of other places where\nwe could drop code: any client-side code that has to act differently\nfor pre-7.4 servers can lose that option, because it'll never be\ntalking to one of those now.\n\nPatched psql, trying to connect to a 7.3 server, reports this:\n\n$ psql -h ...\npsql: error: connection to server at \"sss2\" (192.168.1.3), port 5432 failed: FATAL: unsupported frontend protocol\n\n$\n\nConversely, 7.3 psql trying to connect to a patched server reports:\n\n$ psql -h ...\npsql: FATAL: unsupported frontend protocol 2.0: server supports 3.0 to 3.0\n\n$\n\nI'm not sure where the extra newlines are coming from, and it seems\nunlikely to be worth worrying over. This behavior is good enough for me.\n\nI concur that 0001 attached is committable. I have not looked at\nyour 0002, though.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 03 Mar 2021 18:32:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Removing support for COPY FROM STDIN in protocol version 2" }, { "msg_contents": "I wrote:\n> I concur that 0001 attached is committable. I have not looked at\n> your 0002, though.\n\nOh ... 
grepping discovered one more loose end: mention of fe-protocol2.c\nhas to be removed from src/interfaces/libpq/nls.mk.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Mar 2021 18:44:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Removing support for COPY FROM STDIN in protocol version 2" }, { "msg_contents": "On 04/03/2021 01:32, Tom Lane wrote:\n> Patched psql, trying to connect to a 7.3 server, reports this:\n> \n> $ psql -h ...\n> psql: error: connection to server at \"sss2\" (192.168.1.3), port 5432 failed: FATAL: unsupported frontend protocol\n> \n> $\n> \n> Conversely, 7.3 psql trying to connect to a patched server reports:\n> \n> $ psql -h ...\n> psql: FATAL: unsupported frontend protocol 2.0: server supports 3.0 to 3.0\n> \n> $\n> \n> I'm not sure where the extra newlines are coming from, and it seems\n> unlikely to be worth worrying over. This behavior is good enough for me.\n\nfe-connect.c appends a newline for any errors in pre-3.0 format:\n\n> \n> \t\t/*\n> \t\t * The postmaster typically won't end its message with a\n> \t\t * newline, so add one to conform to libpq conventions.\n> \t\t */\n> \t\tappendPQExpBufferChar(&conn->errorMessage, '\\n');\n\nThat comment is wrong. The postmaster *does* end all its error messages \nwith a newline. This changed in commit 9b4bfbdc2c in 7.2. Before that, \npostmaster had its own function, PacketSendError(), to send error \nmessages, and it did not append a newline. Commit 9b4bfbdc2 changed \npostmaster to use elog(...) like everyone else, and elog(...) has always \nappended a newline. So I think this extra newline that libpq adds is \nneeded if you try to connect to PostgreSQL 7.1 or earlier. I couldn't \ncommpile a 7.1 server to verify this, though.\n\nI changed that code in libpq to check if the message already has a \nnewline, and only append one if it doesn't. 
This fixes the extra newline \nwhen connecting with new libpq to a 7.3 server (and in the fork failure \nmessage).\n\n> I concur that 0001 attached is committable. I have not looked at\n> your 0002, though.\n\nRemoved the entry from nls.mk, and pushed 0001. Thanks!\n\n- Heikki\n\n\n", "msg_date": "Thu, 4 Mar 2021 10:59:41 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Removing support for COPY FROM STDIN in protocol version 2" }, { "msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> On 04/03/2021 01:32, Tom Lane wrote:\n>> I'm not sure where the extra newlines are coming from, and it seems\n>> unlikely to be worth worrying over. This behavior is good enough for me.\n\n> fe-connect.c appends a newline for any errors in pre-3.0 format:\n\n>> \t\t/*\n>> \t\t * The postmaster typically won't end its message with a\n>> \t\t * newline, so add one to conform to libpq conventions.\n>> \t\t */\n>> \t\tappendPQExpBufferChar(&conn->errorMessage, '\\n');\n\n> That comment is wrong. The postmaster *does* end all its error messages \n> with a newline. This changed in commit 9b4bfbdc2c in 7.2.\n\nAh-hah, and the bit you show here came in with 2af360ed1, in 7.0.\nI'm surprised though that we didn't notice that the newline was now\nusually redundant. This was a commonly taken code path until 7.4.\n\nAnyway, your fix seems fine ... I wonder if we should back-patch it?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Mar 2021 11:11:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Removing support for COPY FROM STDIN in protocol version 2" }, { "msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>> I concur that 0001 attached is committable. I have not looked at\n>> your 0002, though.\n\n> Removed the entry from nls.mk, and pushed 0001. 
Thanks!\n\nIt seems that buildfarm member walleye doesn't like this.\nSince nothing else is complaining, I confess bafflement\nas to why. walleye seems to be our only active mingw animal,\nso maybe there's a platform dependency somewhere ... but\nhow would (mostly) removal of code expose that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Mar 2021 15:04:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Removing support for COPY FROM STDIN in protocol version 2" }, { "msg_contents": "On 04/03/2021 22:04, Tom Lane wrote:\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>>> I concur that 0001 attached is committable. I have not looked at\n>>> your 0002, though.\n> \n>> Removed the entry from nls.mk, and pushed 0001. Thanks!\n> \n> It seems that buildfarm member walleye doesn't like this.\n> Since nothing else is complaining, I confess bafflement\n> as to why. walleye seems to be our only active mingw animal,\n> so maybe there's a platform dependency somewhere ... but\n> how would (mostly) removal of code expose that?\n\nStrange indeed. 
The commands that are crashing seem far detached from \nany FE/BE protocol handling, and I don't see any other pattern either:\n\n2021-03-04 05:08:45.953 EST [4080:94] DETAIL: Failed process was \nrunning: copy (insert into copydml_test default values) to stdout;\n\n2021-03-04 05:09:22.690 EST [4080:100] DETAIL: Failed process was \nrunning: CREATE INDEX CONCURRENTLY concur_index7 ON concur_heap(f1);\n\n2021-03-04 05:09:33.546 EST [4080:106] DETAIL: Failed process was \nrunning: ANALYZE vaccluster;\n\n2021-03-04 05:09:42.452 EST [4080:112] DETAIL: Failed process was \nrunning: FETCH BACKWARD 1 FROM foo24;\n\n2021-03-04 05:10:06.874 EST [4080:118] DETAIL: Failed process was \nrunning: REFRESH MATERIALIZED VIEW CONCURRENTLY mvtest_tvmm;\n\n2021-03-04 05:12:23.890 EST [4080:125] DETAIL: Failed process was \nrunning: CREATE SUBSCRIPTION regress_testsub CONNECTION 'testconn' \nPUBLICATION testpub;\n\n2021-03-04 05:15:46.421 EST [4080:297] DETAIL: Failed process was \nrunning: INSERT INTO xmltest VALUES (3, '<wrong');\n\nDare I suggest a compiler bug? gcc 8.1 isn't the fully up-to-date, \nalthough I don't know if there's a newer gcc available on this platform. 
\nJoseph, any chance we could see a backtrace or some other details from \nthose crashes?\n\n\n\n'drongo' just reported linker errors:\n\npostgres.def : error LNK2001: unresolved external symbol \nGetOldFunctionMessage \n[c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\postgres.vcxproj]\npostgres.def : error LNK2001: unresolved external symbol errfunction \n[c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\postgres.vcxproj]\npostgres.def : error LNK2001: unresolved external symbol pq_getstring \n[c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\postgres.vcxproj]\npostgres.def : error LNK2001: unresolved external symbol pq_putbytes \n[c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\postgres.vcxproj]\nRelease/postgres/postgres.lib : fatal error LNK1120: 4 unresolved \nexternals [c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\postgres.vcxproj]\nDone Building Project \n\"c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\postgres.vcxproj\" (default \ntargets) -- FAILED.\n\nLooks like it wasn't a clean build, those functions and all the callers \nwere removed by the patch. 
That's a separate issue than on 'walleye' - \nunless that was also not a completely clean build?\n\n- Heikki\n\n\n", "msg_date": "Thu, 4 Mar 2021 22:55:54 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Removing support for COPY FROM STDIN in protocol version 2" }, { "msg_contents": "\nOn 3/4/21 3:55 PM, Heikki Linnakangas wrote:\n>\n>\n>\n>\n> 'drongo' just reported linker errors:\n>\n> postgres.def : error LNK2001: unresolved external symbol\n> GetOldFunctionMessage\n> [c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\postgres.vcxproj]\n> postgres.def : error LNK2001: unresolved external symbol errfunction\n> [c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\postgres.vcxproj]\n> postgres.def : error LNK2001: unresolved external symbol pq_getstring\n> [c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\postgres.vcxproj]\n> postgres.def : error LNK2001: unresolved external symbol pq_putbytes\n> [c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\postgres.vcxproj]\n> Release/postgres/postgres.lib : fatal error LNK1120: 4 unresolved\n> externals [c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\postgres.vcxproj]\n> Done Building Project\n> \"c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\postgres.vcxproj\" (default\n> targets) -- FAILED.\n>\n> Looks like it wasn't a clean build, those functions and all the\n> callers were removed by the patch. 
That's a separate issue than on\n> 'walleye' - unless that was also not a completely clean build?\n>\n>\n\n\nYes, pilot error :-)(\n\n\nIt's rerunning and should report clean shortly\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 4 Mar 2021 16:17:14 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Removing support for COPY FROM STDIN in protocol version 2" }, { "msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> Joseph, any chance we could see a backtrace or some other details from \n> those crashes?\n\n+1\n\n> 'drongo' just reported linker errors:\n> postgres.def : error LNK2001: unresolved external symbol \n> GetOldFunctionMessage \n> [c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\postgres.vcxproj]\n> postgres.def : error LNK2001: unresolved external symbol errfunction \n> [c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\postgres.vcxproj]\n> postgres.def : error LNK2001: unresolved external symbol pq_getstring \n> [c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\postgres.vcxproj]\n> postgres.def : error LNK2001: unresolved external symbol pq_putbytes \n> [c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\postgres.vcxproj]\n> Release/postgres/postgres.lib : fatal error LNK1120: 4 unresolved \n> externals [c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\postgres.vcxproj]\n> Done Building Project \n> \"c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\postgres.vcxproj\" (default \n> targets) -- FAILED.\n\nAs far as that goes, I think suspicion has to fall on this:\n\n Not re-generating POSTGRES.DEF, file already exists.\n\nwhich gendef.pl prints if it thinks the def file is newer than\nall the inputs. 
So either drongo had some kind of clock skew\nissue, or that bit of logic in gendef.pl has some unobvious bug.\n\n(I say \"had\" because I see the next run went fine.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Mar 2021 16:35:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Removing support for COPY FROM STDIN in protocol version 2" } ]
[ { "msg_contents": "Cluster file encryption plans to use the LSN and page number as the\nnonce for heap/index pages. I am looking into the use of a unique nonce\nduring hint bit changes. (You need to use a new nonce for re-encrypting\na page that changes.)\n\nlog_hint_bits already gives us a unique nonce for the first hint bit\nchange on a page during a checkpoint, but we only encrypt on page write\nto the file system, so I am researching if log_hint_bits will already\ngenerate a unique LSN for every page write to the file system, even if\nthere are multiple hint-bit-caused page writes to the file system during\na single checkpoint. (We already know this works for multiple\ncheckpoints.)\n\nOur docs on full_page_writes states:\n\n\tWhen this parameter is on, the\n\t<productname>PostgreSQL</productname> server writes the entire\n\tcontent of each disk page to WAL during the first modification\n\tof that page after a checkpoint.\n\nand wal_log_hints states:\n\n\tWhen this parameter is <literal>on</literal>, the\n\t<productname>PostgreSQL</productname> server writes the entire\n\tcontent of each disk page to WAL during the first modification of\n\tthat page after a checkpoint, even for non-critical modifications\n\tof so-called hint bits.\n\nHowever, imagine these steps:\n\n1. checkpoint starts\n2. page is modified by row or hint bit change\n3. page gets a new LSN and is marked as dirty\n4. page image is flushed to WAL\n5. pages is written to disk and marked as clean\n6. page is modified by data or hint bit change\n7. pages gets a new LSN and is marked as dirty\n8. page image is flushed to WAL\n9. checkpoint completes\n10. 
pages is written to disk and marked as clean\n\nIs the above case valid, and would it cause two full page writes to WAL?\nMore specifically, wouldn't it cause every write of the page to the file\nsystem to use a new LSN?\n\nIf so, this means wal_log_hints is sufficient to guarantee a new nonce\nfor every page image, even for multiple hint bit changes and page writes\nduring a single checkpoint, and there is then no need for a hint bit\ncounter on the page --- the unique LSN does that for us. I know\nlog_hint_bits was designed to fix torn pages, but it seems to also do\nexactly what cluster file encryption needs.\n\nIf the above is all true, should we update the docs, READMEs, or C\ncomments about this? I think the cluster file encryption patch would at\nleast need to document that we need to keep this behavior, because I\ndon't think log_hint_bits needs to behave this way for checksum\npurposes because of the way full page writes are processed during crash\nrecovery.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 3 Feb 2021 18:05:56 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Multiple full page writes in a single checkpoint?" }, { "msg_contents": "Hi,\n\nOn 2021-02-03 18:05:56 -0500, Bruce Momjian wrote:\n> log_hint_bits already gives us a unique nonce for the first hint bit\n> change on a page during a checkpoint, but we only encrypt on page write\n> to the file system, so I am researching if log_hint_bits will already\n> generate a unique LSN for every page write to the file system, even if\n> there are multiple hint-bit-caused page writes to the file system during\n> a single checkpoint. (We already know this works for multiple\n> checkpoints.)\n\nNo, it won't:\n\n> However, imagine these steps:\n> \n> 1. checkpoint starts\n> 2. page is modified by row or hint bit change\n> 3. 
page gets a new LSN and is marked as dirty\n> 4. page image is flushed to WAL\n> 5. page is written to disk and marked as clean\n> 6. page is modified by data or hint bit change\n> 7. page gets a new LSN and is marked as dirty\n> 8. page image is flushed to WAL\n> 9. checkpoint completes\n> 10. page is written to disk and marked as clean\n> \n> Is the above case valid, and would it cause two full page writes to WAL?\n> More specifically, wouldn't it cause every write of the page to the file\n> system to use a new LSN?\n\nNo. 8) won't happen. Look e.g. at XLogSaveBufferForHint():\n\n /*\n * Update RedoRecPtr so that we can make the right decision\n */\n RedoRecPtr = GetRedoRecPtr();\n\n /*\n * We assume page LSN is first data on *every* page that can be passed to\n * XLogInsert, whether it has the standard page layout or not. Since we're\n * only holding a share-lock on the page, we must take the buffer header\n * lock when we look at the LSN.\n */\n lsn = BufferGetLSNAtomic(buffer);\n\n if (lsn <= RedoRecPtr)\n /* wal log hint bit */\n\nThe RedoRecPtr is determined at 1. and doesn't change between 4) and\n8). The LSN for 4) has to be *past* the RedoRecPtr from 1). Therefore we\ndon't do another FPW.\n\n\nChanging this is *completely* infeasible. In a lot of workloads it'd\ncause a *massive* explosion of WAL volume. Like quadratically. You'll\nneed to find another way to generate a nonce.\n\nIn the non-hint bit case you'll automatically have a higher LSN in 7/8\nthough. So you won't need to do anything about getting a higher nonce.\n\nFor the hint bit case in 8 you could consider just using any LSN generated\nafter 4 (preferably already flushed to disk) - but that seems somewhat\nugly from a debuggability POV :/. 
Alternatively you could just create\ntiny WAL record to get a new LSN, but that'll sometimes trigger new WAL\nflushes when the pages are dirtied.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 3 Feb 2021 15:29:13 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Multiple full page writes in a single checkpoint?" }, { "msg_contents": "On Wed, Feb 3, 2021 at 03:29:13PM -0800, Andres Freund wrote:\n> > Is the above case valid, and would it cause two full page writes to WAL?\n> > More specifically, wouldn't it cause every write of the page to the file\n> > system to use a new LSN?\n> \n> No. 8) won't happen. Look e.g. at XLogSaveBufferForHint():\n> \n> /*\n> * Update RedoRecPtr so that we can make the right decision\n> */\n> RedoRecPtr = GetRedoRecPtr();\n> \n> /*\n> * We assume page LSN is first data on *every* page that can be passed to\n> * XLogInsert, whether it has the standard page layout or not. Since we're\n> * only holding a share-lock on the page, we must take the buffer header\n> * lock when we look at the LSN.\n> */\n> lsn = BufferGetLSNAtomic(buffer);\n> \n> if (lsn <= RedoRecPtr)\n> /* wal log hint bit */\n> \n> The RedoRecPtr is determined at 1. and doesn't change between 4) and\n> 8). The LSN for 4) has to be *past* the RedoRecPtr from 1). Therefore we\n> don't do another FPW.\n\nOK, so, what is happening is that it knows the page LSN is after the\nstart of the current checkpoint (the redo point), so it knows not to do\na full page write again? Smart, and makes sense.\n\n> Changing this is *completely* infeasible. In a lot of workloads it'd\n> cause a *massive* explosion of WAL volume. Like quadratically. 
You'll\n> need to find another way to generate a nonce.\n\nDo we often do multiple writes to the file system of the same page\nduring a single checkpoint, particularly only-hint-bit-modified pages?\nI didn't think so.\n\n> In the non-hint bit case you'll automatically have a higher LSN in 7/8\n> though. So you won't need to do anything about getting a higher nonce.\n\nYes, I was counting on that. :-)\n\n> For the hint bit case in 8 you could consider just using any LSN generated\n> after 4 (preferably already flushed to disk) - but that seems somewhat\n> ugly from a debuggability POV :/. Alternatively you could just create\n> tiny WAL record to get a new LSN, but that'll sometimes trigger new WAL\n> flushes when the pages are dirtied.\n\nYes, that would make sense. I do need the first full page write during\na checkpoint to be sure I don't have torn pages that have some part of\nthe page encrypted with one LSN and a second part with a different LSN. \nYou are right that I don't need a second full page write during the same\ncheckpoint because a torn page would just restore the first full page\nwrite and throw away the second LSN and hint bit changes, which is fine.\n\nI hadn't gotten to ask about that until I found if the previous\nassumptions were true, which they were not.\n\nIs the logical approach here to modify XLogSaveBufferForHint() so if a\npage write is not needed, to create a dummy WAL record that just\nincrements the WAL location and updates the page LSN? (Is there a small\nWAL record I should reuse?) I can try to add a hint-bit-page-write page\ncounter, but that might overflow, and then we will need a way to change\nthe LSN anyway.\n\nI am researching this so I can give a clear report on the impact of\nadding this feature. 
I will update the wiki once we figure this out.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 3 Feb 2021 19:21:25 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Multiple full page writes in a single checkpoint?" }, { "msg_contents": "Hi,\n\nOn 2021-02-03 19:21:25 -0500, Bruce Momjian wrote:\n> On Wed, Feb 3, 2021 at 03:29:13PM -0800, Andres Freund wrote:\n> > Changing this is *completely* infeasible. In a lot of workloads it'd\n> > cause a *massive* explosion of WAL volume. Like quadratically. You'll\n> > need to find another way to generate a nonce.\n>\n> Do we often do multiple writes to the file system of the same page\n> during a single checkpoint, particularly only-hint-bit-modified pages?\n> I didn't think so.\n\nIt can easily happen. Consider ringbuffer using scans (like vacuum,\nseqscan) - they'll force the buffer out to disk soon after it's been\ndirtied. And often will read the same page again a short bit later. Or\njust any workload that's a bit bigger than shared buffers (but data is\nin the OS cache). Subsequent scans will often have new hint bits to\nset.\n\n\n> Is the logical approach here to modify XLogSaveBufferForHint() so if a\n> page write is not needed, to create a dummy WAL record that just\n> increments the WAL location and updates the page LSN?\n> (Is there a small WAL record I should reuse?)\n\nI think an explicit record type would be better. 
Or a hint record\nwithout an associated FPW.\n\n\n> I can try to add a hint-bit-page-write page counter, but that might\n> overflow, and then we will need a way to change the LSN anyway.\n\nThat's just a question of width...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 3 Feb 2021 17:00:19 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Multiple full page writes in a single checkpoint?" }, { "msg_contents": "On Wed, Feb 3, 2021 at 05:00:19PM -0800, Andres Freund wrote:\n> Hi,\n> \n> On 2021-02-03 19:21:25 -0500, Bruce Momjian wrote:\n> > On Wed, Feb 3, 2021 at 03:29:13PM -0800, Andres Freund wrote:\n> > > Changing this is *completely* infeasible. In a lot of workloads it'd\n> > > cause a *massive* explosion of WAL volume. Like quadratically. You'll\n> > > need to find another way to generate a nonce.\n> >\n> > Do we often do multiple writes to the file system of the same page\n> > during a single checkpoint, particularly only-hint-bit-modified pages?\n> > I didn't think so.\n> \n> It can easily happen. Consider ringbuffer using scans (like vacuum,\n> seqscan) - they'll force the buffer out to disk soon after it's been\n> dirtied. And often will read the same page again a short bit later. Or\n> just any workload that's a bit bigger than shared buffers (but data is\n> in the OS cache). Subsequent scans will often have new hint bits to\n> set.\n\nOh, good point.\n\n> > Is the logical approach here to modify XLogSaveBufferForHint() so if a\n> > page write is not needed, to create a dummy WAL record that just\n> > increments the WAL location and updates the page LSN?\n> > (Is there a small WAL record I should reuse?)\n> \n> I think an explicit record type would be better. 
Or a hint record\n> without an associated FPW.\n\nOK.\n\n> > I can try to add a hint-bit-page-write page counter, but that might\n> > overflow, and then we will need a way to change the LSN anyway.\n> \n> That's just a question of width...\n\nYeah, the hint bit counter is just delaying the inevitable, plus it\nchanges the page format, which I am trying to avoid. Also, I need this\ndummy record only if the page is marked clean, meaning a write\nto the file system already happened in the current checkpoint --- should\nnot be too bad.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 3 Feb 2021 20:07:16 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Multiple full page writes in a single checkpoint?" }, { "msg_contents": "On Wed, Feb 3, 2021 at 08:07:16PM -0500, Bruce Momjian wrote:\n> > > I can try to add a hint-bit-page-write page counter, but that might\n> > > overflow, and then we will need a way to change the LSN anyway.\n> > \n> > That's just a question of width...\n> \n> Yeah, the hint bit counter is just delaying the inevitable, plus it\n> changes the page format, which I am trying to avoid. Also, I need this\n> dummy record only if the page is marked clean, meaning a write\n> to the file system already happened in the current checkpoint --- should\n> not be too bad.\n\nHere is a proof-of-concept patch to do this. Thanks for your help.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee", "msg_date": "Wed, 3 Feb 2021 22:28:35 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Multiple full page writes in a single checkpoint?" 
}, { "msg_contents": "On Wed, Feb 3, 2021 at 08:07:16PM -0500, Bruce Momjian wrote:\n> > > I can try to add a hint-bit-page-write page counter, but that might\n> > > overflow, and then we will need a way to change the LSN anyway.\n> > \n> > That's just a question of width...\n> \n> Yeah, the hint bit counter is just delaying the inevitable, plus it\n> changes the page format, which I am trying to avoid. Also, I need this\n> dummy record only if the page is marked clean, meaning a write\n> to the file system already happened in the current checkpoint --- should\n> not be too bad.\n\nIn looking at your comments on Sawada-san's POC patch for buffer\nencryption:\n\n\thttps://www.postgresql.org/message-id/20210112193431.2edcz776qjen7kao%40alap3.anarazel.de\n\nI see that he put a similar function call in exactly the same place I\ndid, but you pointed out that he was inserting into WAL while holding a\nbuffer lock.\n\nI restructured my patch to not make that same mistake, and modified it\nfor non-permanent buffers --- attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee", "msg_date": "Thu, 4 Feb 2021 13:49:32 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Multiple full page writes in a single checkpoint?" } ]
[ { "msg_contents": "Hackers,\n\nThe following errhint in pg_read_file() makes little sense:\n\n errhint(\"Consider using %s, which is part of core, instead.\",\n \"pg_file_read()\")));\n\nGrep'ing through master, there is almost nothing named pg_file_read, and what does exist is dropped when upgrading to adminpack 2.0:\n\nPerhaps this errhint made sense at some point in the past? It looks like core only uses this C-function named \"pg_read_file\" by the SQL function named \"pg_read_file_old\", but adminpack-1.0 also used it for a SQL function named pg_file_read, which gets dropped in the adminpack--1.1--2.0.sql upgrade file. If you haven't upgraded adminpack, it makes little sense to call adminpack's pg_file_read() function and get a hint telling you to instead use pg_file_read(). But calling pg_read_file_old() and being told to use pg_file_read() instead also doesn't make sense, because it doesn't exist.\n\nI was going to submit a patch for this, but the more I look at it the less I understand what is intended by this code. Thoughts?\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 3 Feb 2021 17:48:35 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "possibly outdated pg_file_read() errhint" } ]
[ { "msg_contents": "Hi,\n\nSecond paragraph of this comment (procarray.c:1604) says:\n* See the definition of ComputedXidHorizonsResult for the various computed\n\nIt should say ComputeXidHorizonsResult (it has an extra \"d\" in Computed)\n\n--\nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Wed, 3 Feb 2021 22:58:24 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": true, "msg_subject": "typo in \"Determine XID horizons\" comment in procarray.c" } ]
[ { "msg_contents": "Hello,\n\nI noticed that CheckAttributeNamesTypes() prevents to create a table that has\nmore than MaxHeapAttributeNumber (1600) columns, for foreign-table also.\nIIUC, this magic number comes from length of the null-bitmap can be covered\nwith t_hoff in HeapTupleHeaderData.\nFor heap-tables, it seems to me a reasonable restriction to prevent overrun of\nnull-bitmap. On the other hand, do we have proper reason to apply same\nrestrictions on foreign-tables also?\n\nForeign-tables have their own unique internal data structures instead of\nthe PostgreSQL's heap-table, and some of foreign-data can have thousands\nattributes in their structured data.\nI think that MaxHeapAttributeNumber is a senseless restriction for foreign-\ntables. How about your opinions?\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n", "msg_date": "Thu, 4 Feb 2021 16:24:01 +0900", "msg_from": "Kohei KaiGai <kaigai@heterodb.com>", "msg_from_op": true, "msg_subject": "Is MaxHeapAttributeNumber a reasonable restriction for\n foreign-tables?" }, { "msg_contents": "Hello,\n\nOn Thu, Feb 4, 2021 at 4:24 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> I noticed that CheckAttributeNamesTypes() prevents to create a table that has\n> more than MaxHeapAttributeNumber (1600) columns, for foreign-table also.\n> IIUC, this magic number comes from length of the null-bitmap can be covered\n> with t_hoff in HeapTupleHeaderData.\n> For heap-tables, it seems to me a reasonable restriction to prevent overrun of\n> null-bitmap. On the other hand, do we have proper reason to apply same\n> restrictions on foreign-tables also?\n>\n> Foreign-tables have their own unique internal data structures instead of\n> the PostgreSQL's heap-table, and some of foreign-data can have thousands\n> attributes in their structured data.\n> I think that MaxHeapAttributeNumber is a senseless restriction for foreign-\n> tables. 
How about your opinions?\n\nMy first reaction to this was a suspicion that the\nMaxHeapAttributeNumber limit would be too ingrained in PostgreSQL's\narchitecture to consider this matter lightly, but actually browsing\nthe code, that may not really be the case. Other than\nsrc/backend/access/heap/*, here are the places that check it:\n\ncatalog/heap.c: CheckAttributeNamesTypes() that you mentioned:\n\n /* Sanity check on column count */\n if (natts < 0 || natts > MaxHeapAttributeNumber)\n ereport(ERROR,\n (errcode(ERRCODE_TOO_MANY_COLUMNS),\n errmsg(\"tables can have at most %d columns\",\n MaxHeapAttributeNumber)));\n\ntablecmds.c: MergeAttributes():\n\n /*\n * Check for and reject tables with too many columns. We perform this\n * check relatively early for two reasons: (a) we don't run the risk of\n * overflowing an AttrNumber in subsequent code (b) an O(n^2) algorithm is\n * okay if we're processing <= 1600 columns, but could take minutes to\n * execute if the user attempts to create a table with hundreds of\n * thousands of columns.\n *\n * Note that we also need to check that we do not exceed this figure after\n * including columns from inherited relations.\n */\n if (list_length(schema) > MaxHeapAttributeNumber)\n ereport(ERROR,\n (errcode(ERRCODE_TOO_MANY_COLUMNS),\n errmsg(\"tables can have at most %d columns\",\n MaxHeapAttributeNumber)));\n\n\ntablecmds.c: ATExecAddColumn():\n\n /* Determine the new attribute's number */\n newattnum = ((Form_pg_class) GETSTRUCT(reltup))->relnatts + 1;\n if (newattnum > MaxHeapAttributeNumber)\n ereport(ERROR,\n (errcode(ERRCODE_TOO_MANY_COLUMNS),\n errmsg(\"tables can have at most %d columns\",\n MaxHeapAttributeNumber)));\n\nSo, unless I am terribly wrong, we may have a shot at revisiting the\ndecision that would have set this limit.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Feb 2021 21:35:41 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, 
"msg_subject": "Re: Is MaxHeapAttributeNumber a reasonable restriction for\n foreign-tables?" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Thu, Feb 4, 2021 at 4:24 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n>> I think that MaxHeapAttributeNumber is a senseless restriction for foreign-\n>> tables. How about your opinions?\n\n> My first reaction to this was a suspicion that the\n> MaxHeapAttributeNumber limit would be too ingrained in PostgreSQL's\n> architecture to consider this matter lightly, but actually browsing\n> the code, that may not really be the case.\n\nYou neglected to search for MaxTupleAttributeNumber...\n\nI'm quite skeptical of trying to raise this limit significantly.\n\nIn the first place, you'd have to worry about the 2^15 limit on\nint16 AttrNumbers --- and keep in mind that that has to be enough\nfor reasonable-size joins, not only an individual table. If you\njoin a dozen or so max-width tables, you're already most of the way\nto that limit.\n\nIn the second place, as noted by the comment you quoted, there are\nalgorithms in various places that are O(N^2) (or maybe even worse?)\nin the number of columns they're dealing with.\n\nIn the third place, I've yet to see a use-case that didn't represent\ncrummy table design. Pushing the table off to a remote server doesn't\nmake it less crummy design.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Feb 2021 09:45:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is MaxHeapAttributeNumber a reasonable restriction for\n foreign-tables?" }, { "msg_contents": "On Thu, Feb 4, 2021 at 11:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Thu, Feb 4, 2021 at 4:24 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> >> I think that MaxHeapAttributeNumber is a senseless restriction for foreign-\n> >> tables. 
How about your opinions?\n>\n> > My first reaction to this was a suspicion that the\n> > MaxHeapAttributeNumber limit would be too ingrained in PostgreSQL's\n> > architecture to consider this matter lightly, but actually browsing\n> > the code, that may not really be the case.\n>\n> You neglected to search for MaxTupleAttributeNumber...\n\nAh, I did. Although, even its usage seems mostly limited to modules\nunder src/backend/access/heap.\n\n> I'm quite skeptical of trying to raise this limit significantly.\n>\n> In the first place, you'd have to worry about the 2^15 limit on\n> int16 AttrNumbers --- and keep in mind that that has to be enough\n> for reasonable-size joins, not only an individual table. If you\n> join a dozen or so max-width tables, you're already most of the way\n> to that limit.\n>\n> In the second place, as noted by the comment you quoted, there are\n> algorithms in various places that are O(N^2) (or maybe even worse?)\n> in the number of columns they're dealing with.\n\nThose are certainly intimidating considerations.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Feb 2021 00:06:05 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is MaxHeapAttributeNumber a reasonable restriction for\n foreign-tables?" }, { "msg_contents": "2021年2月4日(木) 23:45 Tom Lane <tgl@sss.pgh.pa.us>:\n>\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Thu, Feb 4, 2021 at 4:24 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> >> I think that MaxHeapAttributeNumber is a senseless restriction for foreign-\n> >> tables. 
How about your opinions?\n>\n> > My first reaction to this was a suspicion that the\n> > MaxHeapAttributeNumber limit would be too ingrained in PostgreSQL's\n> > architecture to consider this matter lightly, but actually browsing\n> > the code, that may not really be the case.\n>\n> You neglected to search for MaxTupleAttributeNumber...\n>\n> I'm quite skeptical of trying to raise this limit significantly.\n>\n> In the first place, you'd have to worry about the 2^15 limit on\n> int16 AttrNumbers --- and keep in mind that that has to be enough\n> for reasonable-size joins, not only an individual table. If you\n> join a dozen or so max-width tables, you're already most of the way\n> to that limit.\n>\nfree_parsestate() also prevents to use target-list more than\nMaxTupleAttributeNumber.\n(But it is reasonable restriction because we cannot guarantee that\nHeapTupleTableSlot\nis not used during query execution.)\n\n> In the second place, as noted by the comment you quoted, there are\n> algorithms in various places that are O(N^2) (or maybe even worse?)\n> in the number of columns they're dealing with.\n>\nOnly table creation time, isn't it?\nIf N is not small (probably >100), we can use temporary HTAB to ensure\nduplicated column-name is not supplied.\n\n> In the third place, I've yet to see a use-case that didn't represent\n> crummy table design. Pushing the table off to a remote server doesn't\n> make it less crummy design.\n>\nI met this limitation to create a foreign-table that try to map Apache\nArrow file that\ncontains ~2,500 attributes of scientific observation data.\nApache Arrow internally has columnar format, and queries to this\ndata-set references\nup to 10-15 columns on average. 
So, it shall make the query execution much more\nefficient.\n\nThanks,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n", "msg_date": "Fri, 5 Feb 2021 00:20:22 +0900", "msg_from": "Kohei KaiGai <kaigai@heterodb.com>", "msg_from_op": true, "msg_subject": "Re: Is MaxHeapAttributeNumber a reasonable restriction for\n foreign-tables?" } ]
[ { "msg_contents": "Hi, hacker\n\nI found in function ECPGconnect, the connect string in comment is written as:\n\n/*------\n * new style:\n *\t<tcp|unix>:postgresql://server[:port|:/unixsocket/path:]\n *\t[/db-name][?options]\n *------\n*/\n\nBut, the parse logical seems wrong, like:\n\ntmp = strrchr(dbname + offset, ':');\ntmp2 = strchr(tmp + 1, ':')\n\nthe value tmp2 will always be NULL, the unix-socket path will be ignored.\n\nI have fixed this problem, the patch attached. \nHowever, since this usage is not recorded in manual[1](maybe this is why this problem is not found for a long time), so how about delete this source directly instead?\nThoughts?\n\nThis patch only fix the problem when using a character variable to store the connect string like:\n\nEXEC SQL BEGIN DECLARE SECTION;\n char constr[] = \"unix:postgresql://localhost:/tmp/a:?port=5435&dbname=postgres\";\nEXEC SQL END DECLARE SECTION;\n\nIf I write a source like:\nEXEC SQL CONNECT TO unix:postgresql://localhost:/tmp/a:/postgres?port=5435\nEXEC SQL CONNECT TO unix:postgresql://localhost/postgres?host=/tmp/a&port=5435\nThe program ecpg will report some error when parse .pgc file\n\nI will try to fix this problem later, but it seems a little difficult to add some lex/bison file rules\n\n[1] https://www.postgresql.org/docs/13/ecpg-connect.html#ECPG-CONNECTING\n\nBest regards\nShenhao Wang", "msg_date": "Thu, 4 Feb 2021 09:25:00 +0000", "msg_from": "\"Wang, Shenhao\" <wangsh.fnst@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "parse mistake in ecpg connect string" }, { "msg_contents": "At Thu, 4 Feb 2021 09:25:00 +0000, \"Wang, Shenhao\" <wangsh.fnst@cn.fujitsu.com> wrote in \n> Hi, hacker\n> \n> I found in function ECPGconnect, the connect string in comment is written as:\n> \n> /*------\n> * new style:\n> *\t<tcp|unix>:postgresql://server[:port|:/unixsocket/path:]\n> *\t[/db-name][?options]\n> *------\n> */\n> \n> But, the parse logical seems wrong, like:\n\nActually it looks like broken, 
but..\n \n> [1] https://www.postgresql.org/docs/13/ecpg-connect.html#ECPG-CONNECTING\n\nThe comment and related code seem to be remnants of an ancient syntax\nof hostname/socket-path style, which should have been cleaned up in\n2000. I guess that the tcp: and unix: style target remains just for\nbackward compatibility, I'm not sure, though. Nowadays you can do\nthat by using the \"dbname[@hostname][:port]\" style target.\n\nEXEC SQL CONNECT TO 'postgres@/tmp:5432';\nEXEC SQL CONNECT TO 'unix:postgresql://localhost:5432/postgres?host=/tmp';\n\nFWIW, directly embedding /unixsocket/path syntax in a URL is broken in\nthe view of URI. It is the reason why the current connection URI takes\nthe way shown above. So I think we want to remove that code rather\nthan to fix it.\n\nAnd, since the documentation is saying that the bare target\nspecification is somewhat unstable, I'm not sure we dare to *fix* the\necpg syntax.\n\nIn [1]\n> In practice, it is probably less error-prone to use a (single-quoted)\n> string literal or a variable reference.\n\nThat being said, we might need a description about how we can specify\na unix socket directory in ecpg-connect.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 08 Feb 2021 12:00:20 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parse mistake in ecpg connect string" }, { "msg_contents": "Dear Wang,\n\n> the value tmp2 will always be NULL, the unix-socket path will be ignored.\n\nI confirmed it, you're right.\n\n> I have fixed this problem, the patch attached.\n\nIt looks good to me:-).\n\n> I will try to fix this problem later, but it seems a little difficult to add some lex/bison file rules\n\nI think rule `connection_target` must be fixed.\nEither port or unix_socket_directory can be accepted now(I followed the comment),\nbut we should discuss about it.\nAccording to the doc, libpq's connection-URI accept both.\n\nThe 
attached patch contains all fixes, and pass test in my environment.\nAnd the following line:\n\n EXEC SQL CONNECT TO unix:postgresql://localhost:/a:/postgres;\n\nis precompiled to:\n\n { ECPGconnect(__LINE__, 0, \"unix:postgresql://localhost:/a:/postgres\" , NULL, NULL , NULL, 0); }\n\nIs it OK?\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED", "msg_date": "Mon, 8 Feb 2021 03:02:26 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: parse mistake in ecpg connect string" }, { "msg_contents": "Dear Horiguchi-san,\n\nMy response crossed in the e-mail with yours. Sorry.\n\n> FWIW, directly embedding /unixsocket/path syntax in a URL is broken in\n> the view of URI. It is the reason why the current connection URI takes\n> the way shown above. So I think we want to remove that code rather\n> than to fix it.\n\nI didn't know such a phenomenon. If other codes follow the rule,\nI agree yours.\n\nDigress from the main topic, but the following cannot be accepted for the precompiler.\nThis should be fixed, isn't it?\n\nEXEC SQL CONNECT TO postgres@/tmp:5432;\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n", "msg_date": "Mon, 8 Feb 2021 03:25:58 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: parse mistake in ecpg connect string" }, { "msg_contents": "Hi, Horiguchi-san, Kuroda-san:\n\nThank you for reviewing.\n\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n>FWIW, directly embedding /unixsocket/path syntax in a URL is broken in\n>the view of URI. It is the reason why the current connection URI takes\n>the way shown above. So I think we want to remove that code rather\n>than to fix it.\n\nIt seems that remove that code is better.\n\n>That being said, we might need a description about how we can specify\n>a unix socket directory in ecpg-connect.\n\nAfter remove the code, if target is:\n1. 
dbname@/unixsocket/path:port\n2. unix:postgresql://localhost:port/dbname?host=/unixsocket/path\nThe ecpg will report an error.\n\nBut, if target is:\n3. a (single-quoted) string literal of 1 or 2 listed above.\n4. a variable reference of 1 or 2 listed above.\nThe ecpg will precompile successfully. That means if we want to use a unix socket directory in ecpg-connect.\nWe can only use No.3 and No.4 listed above.\n\nI think we can add some description on docs, but I don't have ability to write description in English,\nCan someone help me write a description?\n\nKuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\n\n>Digress from the main topic, but the following cannot be accepted for the precompiler.\n>This should be fixed, isn't it?\n>\n>EXEC SQL CONNECT TO postgres@/tmp:5432;\n\nFirst, thank you for adding a bison rule.\n\nI think add the bison rule is a little difficult because in PG13 windows can also support unix-socket, \nIn your patch:\n> dir_name: '/' dir_name\t\t{ $$ = make2_str(mm_strdup(\"/\"), $2); }\n> \t\t| ecpg_ident\t\t{ $$ = $1; }\n>\t\t;\nWindows will remains wrong(I'm not sure ecpg on windows can use unix socket connection).\n\nAnd if we add the rules in bison files, both ecpg and ecpglib will both parse the host in different ways.\nEcpg parse the host by bison rules, and ecpglib parse the host by splitting the connect string use char '@' or char '='.\nI think it's not a good action.\n\nBut If we add some description on docs, these problem can be solved in an easy way.\nTherefore, I prefer to add some description on docs.\n\n\nBest regards\nShenhao Wang\n\n\n\n\n", "msg_date": "Mon, 8 Feb 2021 08:34:47 +0000", "msg_from": "\"Wang, Shenhao\" <wangsh.fnst@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: parse mistake in ecpg connect string" }, { "msg_contents": "\"Wang, Shenhao\" <wangsh.fnst@cn.fujitsu.com> writes:\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>> FWIW, directly embedding /unixsocket/path syntax in a URL is 
broken in\n>> the view of URI. It is the reason why the current connection URI takes\n>> the way shown above. So I think we want to remove that code rather\n>> than to fix it.\n\n> It seems that remove that code is better.\n\nFWIW, I agree with Horiguchi-san that we should just take out the dead\ncode in ECPGconnect(). Some checking in our git history shows that it's\nnever worked since it was added (in a4f25b6a9c2). If nobody's noticed\nin 18 years, and the documentation doesn't say that it should work,\nthen that's not a feature we need to support.\n\nI do agree that it'd be a good idea to extend the documentation to\npoint out how to specify a non-default socket path; but I'm content\nto say that a \"?host=\" option is the only way to do that.\n\nI also got a bit of a laugh out of\n\n if (strcmp(dbname + offset, \"localhost\") != 0 && strcmp(dbname + offset, \"127.0.0.1\") != 0)\n\nShould we allow \"::1\" here as well? On the other hand, colons are\nalready overloaded in this syntax, so maybe allowing them in the\nhost part is a bad idea.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Feb 2021 14:28:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: parse mistake in ecpg connect string" }, { "msg_contents": "Dear Wang, Tom\n\n> I think add the bison rule is a little difficult because in PG13 windows can also support unix-socket, \n> In your patch:\n> > dir_name: '/' dir_name{ $$ = make2_str(mm_strdup(\"/\"), $2); }\n> > | ecpg_ident{ $$ = $1; }\n> >;\n> Windows will remains wrong(I'm not sure ecpg on windows can use unix socket connection).\n> \n> And if we add the rules in bison files, both ecpg and ecpglib will both parse the host in different ways.\n> Ecpg parse the host by bison rules, and ecpglib parse the host by splitting the connect string use char '@' or char '='.\n> I think it's not a good action.\n> \n> But If we add some description on docs, these problem can be solved in an easy way.\n> 
Therefore, I prefer to add some description on docs.\n\nI didn't care about the windows environment.\nSomewhat WIN32 directive can be used for switching code, but I agree your claims.\n\n> I think we can add some description on docs, but I don't have ability to write description in English,\n> Can someone help me write a description?\n\nI'm also not a native English speaker, but I put a draft.\nPlease review it and combine them if it's OK.\n\n> Should we allow \"::1\" here as well? On the other hand, colons are\n> already overloaded in this syntax, so maybe allowing them in the\n> host part is a bad idea.\n\nI have no idea how to fix it now, so I added notice that IPv6 should not be used\nin the host part...\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED", "msg_date": "Tue, 9 Feb 2021 02:12:37 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: parse mistake in ecpg connect string" }, { "msg_contents": "At Tue, 9 Feb 2021 02:12:37 +0000, \"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com> wrote in \n> Dear Wang, Tom\n> \n> > I think add the bison rule is a little difficult because in PG13 windows can also support unix-socket, \n> > In your patch:\n> > > dir_name: '/' dir_name{ $$ = make2_str(mm_strdup(\"/\"), $2); }\n> > > | ecpg_ident{ $$ = $1; }\n> > >;\n> > Windows will remains wrong(I'm not sure ecpg on windows can use unix socket connection).\n> > \n> > And if we add the rules in bison files, both ecpg and ecpglib will both parse the host in different ways.\n> > Ecpg parse the host by bison rules, and ecpglib parse the host by splitting the connect string use char '@' or char '='.\n> > I think it's not a good action.\n> > \n> > But If we add some description on docs, these problem can be solved in an easy way.\n> > Therefore, I prefer to add some description on docs.\n> \n> I didn't care about the windows environment.\n> Somewhat WIN32 directive can be used for switching code, but I 
agree your claims.\n\nThis thread looks like discussing about unix-domain socket on\nWindows. (I'll look into it.)\n\n> > I think we can add some description on docs, but I don't have ability to write description in English,\n> > Can someone help me write a description?\n> \n> I'm also not a native English speaker, but I put a draft.\n> Please review it and combine them if it's OK.\n> \n> > Should we allow \"::1\" here as well? On the other hand, colons are\n> > already overloaded in this syntax, so maybe allowing them in the\n> > host part is a bad idea.\n\nYeah, that made me smile for the same reason:p\n\n> I have no idea how to fix it now, so I added notice that IPv6 should not be used\n> in the host part...\n\nAnyway the host part for the unix: method is just for\nspelling. Although I think we can further remove ipv4 address, we\ndon't even need to bother that. (However, I don't object to add \"::1\"\neither.)\n\nI think replacing \"hostname\" in the unix: method to \"localhost\" works.\n\n> dbname[@hostname][:port]\n> tcp:postgresql://hostname[:port][/dbname][?options]\n- unix:postgresql://<italic>hostname</>[:<i>port</>]..\n+ unix:postgresql://<nonitalic>localhost</>[:<i>port</>]..\n\n@@ -199,6 +199,13 @@ EXEC SQL CONNECT TO <replaceable>target</replaceable> <optional>AS <replaceable>\n any <replaceable>keyword</replaceable> or <replaceable>value</replaceable>,\n though not within or after one. Note that there is no way to\n write <literal>&amp;</literal> within a <replaceable>value</replaceable>.\n+\n+ Also note that if you want to specify the socket directory\n+ for Unix-domain communications, an option <replaceable>host=</replaceable>\n+ and single-quoted string must be used.\n+ The notation rule is almost the same as libpq's one,\n+ but the IPv6 address cannot be used here.\n+\n\n<replaceable> is not the tag to use for \"host\". <varname> is that.\n\nIf we change the \"hostname\" for the unix: method to be fixed, no need\nto mention IP address. 
(Even if someone tries using IP addresses\ninstead of localhost, that case is not our business:p)\n\nHow about the attached?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 09 Feb 2021 13:58:14 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parse mistake in ecpg connect string" }, { "msg_contents": "At Tue, 09 Feb 2021 13:58:14 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > I didn't care about the windows environment.\n> > Somewhat WIN32 directive can be used for switching code, but I agree your claims.\n> \n> This thread looks like discussing about unix-domain socket on\n> Windows. (I'll look into it.)\n\nYes, I forgot to past the URL as ususal:(\n\nhttps://www.postgresql.org/message-id/CAA4eK1KaTu3_CTAdKON_P4FB=-uvNkviJpqYkhLFcmb8xZkk_Q@mail.gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 09 Feb 2021 13:59:40 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parse mistake in ecpg connect string" }, { "msg_contents": "Hi, Horiguchi-san\n\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> How about the attached?\n\nI think, this patch is good.\n\n> > > Should we allow \"::1\" here as well? 
On the other hand, colons are\n> > > already overloaded in this syntax, so maybe allowing them in the\n> > > host part is a bad idea.\n\n> Yeah, that made me smile for the same reason:p\n\nIt seems that ecpg cannot parse the connect str with ipv6 correctly.\n\nSuch as:\nEXEC SQL CONNECT TO 'tcp:postgresql://::1:5432/postgres' \nconnect to the server successfully, but \nEXEC SQL CONNECT TO 'tcp:postgresql://::1/postgres'\nfailed to connect to server.\n\nAnd ecpg will always wrong when parse a connect str \nEXEC SQL CONNECT TO tcp:postgresql://::1:5432/postgres;\nEcpg error :\n\ta.pgc:16: ERROR: syntax error at or near \"::\"\n\nMaybe we should support ipv6 like libpq.\nIn [1],\n> The host part may be either host name or an IP address. To specify an IPv6 host address, enclose it in square brackets:\n\nHow about using square brackets like libpq, such as:\nEXEC SQL CONNECT TO 'tcp:postgresql://[::1]/postgres'\n\nMaybe we can create a new thread to talk about how ecpg support ipv6\n\n[1] https://www.postgresql.org/docs/13/libpq-connect.html#LIBPQ-CONNSTRING\n\nBest regards\nShenhao Wang\n\n\n\n\n\n", "msg_date": "Tue, 9 Feb 2021 06:56:01 +0000", "msg_from": "\"Wang, Shenhao\" <wangsh.fnst@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: parse mistake in ecpg connect string" }, { "msg_contents": "Dear Wang, Horiguchi-san,\n\n> > How about the attached?\n> \n> I think, this patch is good.\n\nI agree. 
The backward compatibility is violated in the doc, but maybe no one take care.\n\n> Maybe we can create a new thread to talk about how ecpg support ipv6\n\n+1\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n", "msg_date": "Tue, 9 Feb 2021 07:38:17 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: parse mistake in ecpg connect string" }, { "msg_contents": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com> writes:\n> Dear Wang, Horiguchi-san,\n>>> How about the attached?\n\n>> I think, this patch is good.\n\n> I agree. The backward compatibility is violated in the doc, but maybe no one take care.\n\nPushed with a little more work on the documentation.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Feb 2021 15:23:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: parse mistake in ecpg connect string" }, { "msg_contents": "At Thu, 11 Feb 2021 15:23:31 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> \"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com> writes:\n> > Dear Wang, Horiguchi-san,\n> >>> How about the attached?\n> \n> >> I think, this patch is good.\n> \n> > I agree. The backward compatibility is violated in the doc, but maybe no one take care.\n> \n> Pushed with a little more work on the documentation.\n\nThanks for committing this (and further update of the document).\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 12 Feb 2021 13:09:22 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parse mistake in ecpg connect string" } ]
[ { "msg_contents": "Hello !\n\nWe encountered the following bug recently in production: when running REINDEX \nCONCURRENTLY on an index, the attstattarget is reset to 0.\n\nConsider the following example: \n\njunk=# \\d+ t1_date_trunc_idx \n Index \"public.t1_date_trunc_idx\"\n Column | Type | Key? | Definition \n| Storage | Stats target \n------------+-----------------------------+------\n+-----------------------------+---------+--------------\n date_trunc | timestamp without time zone | yes | date_trunc('day'::text, ts) \n| plain | 1000\nbtree, for table \"public.t1\"\n\njunk=# REINDEX INDEX t1_date_trunc_idx;\nREINDEX\njunk=# \\d+ t1_date_trunc_idx \n Index \"public.t1_date_trunc_idx\"\n Column | Type | Key? | Definition \n| Storage | Stats target \n------------+-----------------------------+------\n+-----------------------------+---------+--------------\n date_trunc | timestamp without time zone | yes | date_trunc('day'::text, ts) \n| plain | 1000\nbtree, for table \"public.t1\"\n\njunk=# REINDEX INDEX CONCURRENTLY t1_date_trunc_idx;\nREINDEX\njunk=# \\d+ t1_date_trunc_idx \n Index \"public.t1_date_trunc_idx\"\n Column | Type | Key? 
| Definition \n| Storage | Stats target \n------------+-----------------------------+------\n+-----------------------------+---------+--------------\n date_trunc | timestamp without time zone | yes | date_trunc('day'::text, ts) \n| plain | \nbtree, for table \"public.t1\"\n\n\nI'm attaching a patch possibly solving the problem, but maybe the proposed \nchanges will be too intrusive ?\n\nRegards,\n\n-- \nRonan Dunklau", "msg_date": "Thu, 04 Feb 2021 11:04:38 +0100", "msg_from": "Ronan Dunklau <ronan@dunklau.fr>", "msg_from_op": true, "msg_subject": "Preserve attstattarget on REINDEX CONCURRENTLY" }, { "msg_contents": "On 2/4/21 11:04 AM, Ronan Dunklau wrote:\n> Hello !\n> \n> ...\n> \n> junk=# REINDEX INDEX CONCURRENTLY t1_date_trunc_idx;\n> REINDEX\n> junk=# \\d+ t1_date_trunc_idx \n> Index \"public.t1_date_trunc_idx\"\n> Column | Type | Key? | Definition \n> | Storage | Stats target \n> ------------+-----------------------------+------\n> +-----------------------------+---------+--------------\n> date_trunc | timestamp without time zone | yes | date_trunc('day'::text, ts) \n> | plain | \n> btree, for table \"public.t1\"\n> \n> \n> I'm attaching a patch possibly solving the problem, but maybe the proposed \n> changes will be too intrusive ?\n> \n\nHmmm, that sure seems like a bug, or at least unexpected behavior (that\nI don't see mentioned in the docs).\n\nBut the patch seems borked in some way:\n\n$ patch -p1 < ~/keep_attstattargets_on_reindex_concurrently.patch\npatch: **** Only garbage was found in the patch input.\n\nThere seem to be strange escape characters and so on, how did you create\nthe patch? 
Maybe some syntax coloring, or something?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 4 Feb 2021 15:46:50 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Preserve attstattarget on REINDEX CONCURRENTLY" }, { "msg_contents": "> \n> Hmmm, that sure seems like a bug, or at least unexpected behavior (that\n> I don't see mentioned in the docs).\n> \n> But the patch seems borked in some way:\n> \n> $ patch -p1 < ~/keep_attstattargets_on_reindex_concurrently.patch\n> patch: **** Only garbage was found in the patch input.\n> \n> There seem to be strange escape characters and so on, how did you \n> create\n> the patch? Maybe some syntax coloring, or something?\n\nYou're right, I had syntax coloring in the output, sorry.\n\nPlease find attached a correct patch.\n\nRegards,\n\n--\nRonan Dunklau", "msg_date": "Thu, 04 Feb 2021 15:52:44 +0100", "msg_from": "Ronan Dunklau <ronan@dunklau.fr>", "msg_from_op": true, "msg_subject": "Re: Preserve attstattarget on REINDEX CONCURRENTLY" }, { "msg_contents": "On Thu, Feb 04, 2021 at 03:52:44PM +0100, Ronan Dunklau wrote:\n>> Hmmm, that sure seems like a bug, or at least unexpected behavior (that\n>> I don't see mentioned in the docs).\n\nYeah, per the rule of consistency, this classifies as a bug to me.\n\n> Please find attached a correct patch.\n\nConstructTupleDescriptor() does not matter much, but this patch is not\nacceptable to me as it touches the area of the index creation while\nstatistics on an index expression can only be changed with a special\nflavor of ALTER INDEX with column numbers. This would imply an ABI\nbreakage, so it cannot be backpatched as-is.\n\nLet's copy this data in index_concurrently_swap() instead. The\nattached patch does that, and adds a test cheaper than what was\nproposed. 
There is a minor release planned for next week, so I may be\nbetter to wait after that so as we have enough time to agree on a\nsolution.\n--\nMichael", "msg_date": "Fri, 5 Feb 2021 11:17:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Preserve attstattarget on REINDEX CONCURRENTLY" }, { "msg_contents": "Le vendredi 5 février 2021, 03:17:48 CET Michael Paquier a écrit :\n> ConstructTupleDescriptor() does not matter much, but this patch is not\n> acceptable to me as it touches the area of the index creation while\n> statistics on an index expression can only be changed with a special\n> flavor of ALTER INDEX with column numbers. This would imply an ABI\n> breakage, so it cannot be backpatched as-is.\n\nI'm not surprised by this answer, the good news is it's being back-patched. \n\n> \n> Let's copy this data in index_concurrently_swap() instead. The\n> attached patch does that, and adds a test cheaper than what was\n> proposed. There is a minor release planned for next week, so I may be\n> better to wait after that so as we have enough time to agree on a\n> solution.\n\nLooks good to me ! Thank you.\n\n-- \nRonan Dunklau\n\n\n\n\n", "msg_date": "Fri, 05 Feb 2021 08:22:17 +0100", "msg_from": "Ronan Dunklau <ronan@dunklau.fr>", "msg_from_op": true, "msg_subject": "Re: Preserve attstattarget on REINDEX CONCURRENTLY" }, { "msg_contents": "On Fri, Feb 05, 2021 at 08:22:17AM +0100, Ronan Dunklau wrote:\n> I'm not surprised by this answer, the good news is it's being back-patched. \n\nYes, I have no problem with that. Until this gets fixed, the damage\ncan be limited with an extra ALTER INDEX, that takes a\nShareUpdateExclusiveLock so there is no impact on the concurrent\nactivity.\n\n> Looks good to me ! Thank you.\n\nThanks for looking at it. 
Tomas, do you have any comments?\n--\nMichael", "msg_date": "Fri, 5 Feb 2021 16:43:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Preserve attstattarget on REINDEX CONCURRENTLY" }, { "msg_contents": "On 2/5/21 8:43 AM, Michael Paquier wrote:\n> On Fri, Feb 05, 2021 at 08:22:17AM +0100, Ronan Dunklau wrote:\n>> I'm not surprised by this answer, the good news is it's being back-patched.\n> \n> Yes, I have no problem with that. Until this gets fixed, the damage\n> can be limited with an extra ALTER INDEX, that takes a\n> ShareUpdateExclusiveLock so there is no impact on the concurrent\n> activity.\n> \n>> Looks good to me ! Thank you.\n> \n> Thanks for looking at it. Tomas, do you have any comments?\n> --\n\nNot really.\n\nCopying this info in index_concurrently_swap seems a bit strange - we're \ncopying other stuff there, but this is modifying something we've already \ncopied before. I understand why we do it there to make this \nbackpatchable, but maybe it'd be good to mention this in a comment (or \nat least the commit message). 
We could do this in the backbranches only \nand the \"correct\" way in master, but that does not seem worth it.\n\nOne minor comment - the code says this:\n\n /* no need for a refresh if both match */\n if (attstattarget == att->attstattarget)\n continue;\n\nIsn't that just a different way to say \"attstattarget is not default\")?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 6 Feb 2021 22:39:53 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Preserve attstattarget on REINDEX CONCURRENTLY" }, { "msg_contents": "On Sat, Feb 06, 2021 at 10:39:53PM +0100, Tomas Vondra wrote:\n> Copying this info in index_concurrently_swap seems a bit strange - we're\n> copying other stuff there, but this is modifying something we've already\n> copied before. I understand why we do it there to make this backpatchable,\n> but maybe it'd be good to mention this in a comment (or at least the commit\n> message). We could do this in the backbranches only and the \"correct\" way in\n> master, but that does not seem worth it.\n\nThanks.\n\n> One minor comment - the code says this:\n> \n> /* no need for a refresh if both match */\n> if (attstattarget == att->attstattarget)\n> continue;\n> \n> Isn't that just a different way to say \"attstattarget is not default\")?\n\nFor REINDEX CONCURRENTLY, yes. 
I was thinking here about the case\nwhere this code is used for other purposes in the future, where\nattstattarget may not be -1.\n\nI'll see about applying this stuff after the next version is tagged\nthen.\n--\nMichael", "msg_date": "Sun, 7 Feb 2021 09:39:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Preserve attstattarget on REINDEX CONCURRENTLY" }, { "msg_contents": "On Sun, Feb 07, 2021 at 09:39:36AM +0900, Michael Paquier wrote:\n> I'll see about applying this stuff after the next version is tagged\n> then.\n\nThe new versions have been tagged, so done as of bd12080 and\nback-patched. I have added a note in the commit log about the\napproach to use index_create() instead for HEAD.\n--\nMichael", "msg_date": "Wed, 10 Feb 2021 13:35:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Preserve attstattarget on REINDEX CONCURRENTLY" }, { "msg_contents": "On Fri, Feb 05, 2021 at 11:17:48AM +0900, Michael Paquier wrote:\n> Let's copy this data in index_concurrently_swap() instead. The\n> attached patch does that, and adds a test cheaper than what was\n> proposed. 
There is a minor release planned for next week, so I may be\n\n> +++ b/src/test/regress/sql/create_index.sql\n> @@ -1103,6 +1104,13 @@ SELECT starelid::regclass, count(*) FROM pg_statistic WHERE starelid IN (\n> 'concur_exprs_index_pred'::regclass,\n> 'concur_exprs_index_pred_2'::regclass)\n> GROUP BY starelid ORDER BY starelid::regclass::text;\n> +-- attstattarget should remain intact\n> +SELECT attrelid::regclass, attnum, attstattarget\n> + FROM pg_attribute WHERE attrelid IN (\n> + 'concur_exprs_index_expr'::regclass,\n> + 'concur_exprs_index_pred'::regclass,\n> + 'concur_exprs_index_pred_2'::regclass)\n> + ORDER BY 'concur_exprs_index_expr'::regclass::text, attnum;\n\nIf I'm not wrong, you meant to ORDER BY attrelid::regclass::text, attnum;\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 10 Feb 2021 00:58:05 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Preserve attstattarget on REINDEX CONCURRENTLY" }, { "msg_contents": "On Wed, Feb 10, 2021 at 12:58:05AM -0600, Justin Pryzby wrote:\n> If I'm not wrong, you meant to ORDER BY attrelid::regclass::text, attnum;\n\nIndeed, I meant that. Thanks, Justin!\n--\nMichael", "msg_date": "Wed, 10 Feb 2021 17:04:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Preserve attstattarget on REINDEX CONCURRENTLY" } ]
[ { "msg_contents": "hi,\n\ni tested the temporal patch ( https://commitfest.postgresql.org/26/2316/ ) with the current 14devel applied ontop of ef3d461 without any conflicts.\ni build with no special options passed to ./configure and noticed, that the postgresql-client-13 from the debian repositories crashes with the \\d command\n\nto reproduce the issue:\n\n CREATE TABLE test (\n id int PRIMARY KEY generated ALWAYS AS IDENTITY,\n name text NOT NULL,\n start_timestamp timestamp with time zone GENERATED ALWAYS AS ROW START,\n end_timestamp timestamp with time zone GENERATED ALWAYS AS ROW END,\n PERIOD FOR SYSTEM_TIME (start_timestamp, end_timestamp)\n );\n\n \\d test\n\nit failes after outputting the table informations with this backtrace:\n\n free(): invalid pointer\n [1] 587783 abort (core dumped) psql -X -U easteregg -h localhost postgres\n\n (gdb) bt 50\n #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n #1 0x00007f21a62e0537 in __GI_abort () at abort.c:79\n #2 0x00007f21a6339768 in __libc_message (action=action@entry=do_abort, fmt=fmt@entry=0x7f21a6447e31 \"%s\\n\") at ../sysdeps/posix/libc_fatal.c:155\n #3 0x00007f21a6340a5a in malloc_printerr (str=str@entry=0x7f21a644605e \"free(): invalid pointer\") at malloc.c:5347\n #4 0x00007f21a6341c14 in _int_free (av=<optimized out>, p=<optimized out>, have_lock=0) at malloc.c:4173\n #5 0x000055c9fa47b602 in printTableCleanup (content=content@entry=0x7ffece7e41c0) at ./build/../src/fe_utils/print.c:3250\n #6 0x000055c9fa444aa3 in describeOneTableDetails (schemaname=<optimized out>, schemaname@entry=0x55c9fbebfee6 \"public\", relationname=<optimized out>, oid=oid@entry=0x55c9fbebfee0 \"16436\", verbose=verbose@entry=false) at ./build/../src/bin/psql/describe.c:3337\n #7 0x000055c9fa4490c9 in describeTableDetails (pattern=pattern@entry=0x55c9fbebf540 \"abk\", verbose=verbose@entry=false, showSystem=<optimized out>) at ./build/../src/bin/psql/describe.c:1421\n #8 0x000055c9fa4372ff in 
exec_command_d (scan_state=scan_state@entry=0x55c9fbebd130, active_branch=active_branch@entry=true, cmd=cmd@entry=0x55c9fbebf430 \"d\") at ./build/../src/bin/psql/command.c:722\n #9 0x000055c9fa43ae2b in exec_command (previous_buf=0x55c9fbebd3a0, query_buf=0x55c9fbebd270, cstack=0x55c9fbebd250, scan_state=0x55c9fbebd130, cmd=0x55c9fbebf430 \"d\") at ./build/../src/bin/psql/command.c:317\n #10 HandleSlashCmds (scan_state=scan_state@entry=0x55c9fbebd130, cstack=cstack@entry=0x55c9fbebd250, query_buf=0x55c9fbebd270, previous_buf=0x55c9fbebd3a0) at ./build/../src/bin/psql/command.c:220\n #11 0x000055c9fa4539e0 in MainLoop (source=0x7f21a6479980 <_IO_2_1_stdin_>) at ./build/../src/bin/psql/mainloop.c:502\n #12 0x000055c9fa433d64 in main (argc=<optimized out>, argv=0x7ffece7e47f8) at ./build/../src/bin/psql/startup.c:441\n\nthe client is this version:\n\n apt-cache policy postgresql-client-13\n postgresql-client-13:\n Installed: 13.1-1.pgdg+2+b3\n Candidate: 13.1-1.pgdg+2+b3\n Version table:\n *** 13.1-1.pgdg+2+b3 100\n 100 http://apt.postgresql.org/pub/repos/apt sid-pgdg-testing/main amd64 Packages\n 100 /var/lib/dpkg/status\n\nthe the 14devel version from my build or a selfcompiled REL_13_STABLE client will not crash.\ni was wondering if this might pose a security concern.\n\n\ni am a bit out of my depths here, but would be glad to help, if any informations are missing\nwith kind regards, \nrichard\n\n\n", "msg_date": "Thu, 04 Feb 2021 13:43:11 +0100", "msg_from": "easteregg@verfriemelt.org", "msg_from_op": true, "msg_subject": "RE: WIP: System Versioned Temporal Table" } ]
[ { "msg_contents": "Hi,\n\nWhile looking at the proposed removal of the v2 protocol, I noticed that we\nitalicize some, but not all, instances of 'per se', 'pro forma', and 'ad\nhoc'. I'd say these are widespread enough in formal registers of English\nthat they hardly need to be called out as foreign, so I propose removing\nthe tags for those words. Alternatively, we could just add tags to make\nexisting usage consistent, but I have little reason to think it will stay\nthat way. It's also impractical to go and search for other possible words\nthat should have been italicized but weren't.\n\nThe other case is 'voilà', found in rules.sgml. The case for italics here\nis stronger, but looking at that file, I actually think a more\ngeneric-sounding phrase here would be preferable.\n\nOther opinions?\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 4 Feb 2021 11:02:08 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "get rid of <foreignphrase> tags in the docs?"
}, { "msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> While looking at the proposed removal of the v2 protocol, I noticed that we\n> italicize some, but not all, instances of 'per se', 'pro forma', and 'ad\n> hoc'. I'd say these are widespread enough in formal registers of English\n> that they hardly need to be called out as foreign, so I propose removing\n> the tags for those words.\n\n+1, nobody italicizes those in normal usage.\n\n> The other case is 'voilà', found in rules.sgml. The case for italics here\n> is stronger, but looking at that file, I actually think a more\n> generic-sounding phrase here would be preferable.\n\nYeah, seeing that we only use that in one place, I think we could do\nwithout it. Looks like something as pedestrian as \"The results are:\"\nwould do fine.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Feb 2021 10:31:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: get rid of <foreignphrase> tags in the docs?" }, { "msg_contents": "On Thu, Feb 4, 2021 at 11:31 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> John Naylor <john.naylor@enterprisedb.com> writes:\n> > While looking at the proposed removal of the v2 protocol, I noticed\nthat we\n> > italicize some, but not all, instances of 'per se', 'pro forma', and 'ad\n> > hoc'. I'd say these are widespread enough in formal registers of English\n> > that they hardly need to be called out as foreign, so I propose removing\n> > the tags for those words.\n>\n> +1, nobody italicizes those in normal usage.\n\nNow that protocol v2 is gone, here's a patch to remove those tags.\n\n> > The other case is 'voilà', found in rules.sgml. The case for italics\nhere\n> > is stronger, but looking at that file, I actually think a more\n> > generic-sounding phrase here would be preferable.\n>\n> Yeah, seeing that we only use that in one place, I think we could do\n> without it. 
Looks like something as pedestrian as \"The results are:\"\n> would do fine.\n\nDone that way.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 10 Mar 2021 09:47:35 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: get rid of <foreignphrase> tags in the docs?" }, { "msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> On Thu, Feb 4, 2021 at 11:31 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> +1, nobody italicizes those in normal usage.\n\n> Now that protocol v2 is gone, here's a patch to remove those tags.\n\nPushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Mar 2021 12:39:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: get rid of <foreignphrase> tags in the docs?" } ]
[ { "msg_contents": "Greetings\n\nThis morning I was overcome by an urge to create a database with a specific\nlocale, and my insufficiently caffeinated brain reminded me there was a\nhandy\nLOCALE option added not so long ago, but was confused by its stubborn\nabsence\nfrom the list of tab completion options presented by psql on a current HEAD\nbuild, no matter how much I mashed the tab key.\n\nFurther investigation confirmed the LOCALE option was added in PostgreSQL 13\n(commit 06140c20) but neither commit [1] nor discussion [2] mention psql.\nTrivialest of trivial patches attached (will add to next CF).\n\n[1]\nhttps://git.postgresql.org/pg/commitdiff/06140c201b982436974d71e756d7331767a41e57\n[2]\nhttps://www.postgresql.org/message-id/flat/d9d5043a-dc70-da8a-0166-1e218e6e34d4%402ndquadrant.com\n\n\nRegards\n\nIan Barwick\n\n\n-- \nEnterpriseDB: https://www.enterprisedb.com", "msg_date": "Fri, 5 Feb 2021 10:40:12 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": true, "msg_subject": "psql tab completion for CREATE DATABASE ... LOCALE" }, { "msg_contents": "On Fri, Feb 5, 2021 at 2:40 PM Ian Lawrence Barwick <barwick@gmail.com> wrote:\n> Trivialest of trivial patches attached (will add to next CF).\n\nThanks, pushed.\n\n\n", "msg_date": "Fri, 5 Feb 2021 15:51:17 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql tab completion for CREATE DATABASE ... LOCALE" }, { "msg_contents": "2021年2月5日(金) 11:51 Thomas Munro <thomas.munro@gmail.com>:\n\n> On Fri, Feb 5, 2021 at 2:40 PM Ian Lawrence Barwick <barwick@gmail.com>\n> wrote:\n> > Trivialest of trivial patches attached (will add to next CF).\n>\n> Thanks, pushed.\n>\n\nOh, that was quick, thanks! 
I hadn't even got round to adding it to the CF.\n\n\nRegards\n\nIan Barwick\n-- \nEnterpriseDB: https://www.enterprisedb.com", "msg_date": "Fri, 5 Feb 2021 11:57:17 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": true, "msg_subject": "Re: psql tab completion for CREATE DATABASE ... LOCALE" } ]
[ { "msg_contents": "Hi. When running a query with a not exist table in SPI_exec. the process exit with -1 in SPI_exec function.the error code SPI_ERROR_REL_NOT_FOUND never return. I made a minimal reproduction code in https://github.com/Sasasu/worker_spi_table_not_exist The core code is:    int spi = SPI_exec(\"select * from not_exist_table\", 0);   // can not reach here   Assert(spi == SPI_ERROR_REL_NOT_FOUND); I think it is a bug, PG_TRY macro it not mentioned in SPI document.The code inside SPI should be wrapped with PG_TRY macro.\n", "msg_date": "Fri, 05 Feb 2021 10:38:17 +0800", "msg_from": "sasa su <i@sasa.su>", "msg_from_op": true, "msg_subject": "SPI: process exit in SPI_exec when table not exist. error code not\n return." }, { "msg_contents": "2021年2月5日(金) 11:38 sasa su <i@sasa.su>:\n\n> Hi.\n>\n> When running a query with a not exist table in SPI_exec. the process exit\n> with -1 in SPI_exec function.\n> the error code SPI_ERROR_REL_NOT_FOUND never return.\n>\n> I made a minimal reproduction code in\n> https://github.com/Sasasu/worker_spi_table_not_exist\n>\n> The core code is:\n>\n> int spi = SPI_exec(\"select * from not_exist_table\", 0);\n> // can not reach here\n> Assert(spi == SPI_ERROR_REL_NOT_FOUND);\n>\n\n\nThe list of return codes returned by SPI_exec is here:\n\n https://www.postgresql.org/docs/current/spi-spi-execute.html\n\nand doesn't include \"SPI_ERROR_REL_NOT_FOUND\". The only\nfunction which does return that is SPI_unregister_relation, see:\n\n https://www.postgresql.org/docs/current/spi-spi-unregister-relation.html\n\n\nRegards\n\nIan Barwick\n\n\n-- \nEnterpriseDB: https://www.enterprisedb.com", "msg_date": "Fri, 5 Feb 2021 11:55:10 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SPI: process exit in SPI_exec when table not exist. error code\n not return." }, { "msg_contents": "Thanks barwick, Is the process crash also expected, And dose anyone considered enhancing this API? 05.02.2021, 10:55, \"Ian Lawrence Barwick\" <barwick@gmail.com>:2021年2月5日(金) 11:38 sasa su <i@sasa.su>:Hi. When running a query with a not exist table in SPI_exec. the process exit with -1 in SPI_exec function.the error code SPI_ERROR_REL_NOT_FOUND never return. I made a minimal reproduction code in https://github.com/Sasasu/worker_spi_table_not_exist The core code is:    int spi = SPI_exec(\"select * from not_exist_table\", 0);   // can not reach here   Assert(spi == SPI_ERROR_REL_NOT_FOUND);
The onlyfunction which does return that is SPI_unregister_relation, see:   https://www.postgresql.org/docs/current/spi-spi-unregister-relation.html  Regards Ian Barwick--EnterpriseDB: https://www.enterprisedb.com ", "msg_date": "Fri, 05 Feb 2021 11:15:07 +0800", "msg_from": "sasa su <i@sasa.su>", "msg_from_op": true, "msg_subject": "Re: SPI: process exit in SPI_exec when table not exist. error code\n not return." } ]
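A note on the mechanics behind the thread above: SPI_exec never gets a chance to return SPI_ERROR_REL_NOT_FOUND for a missing relation, because the ERROR raised during parsing is propagated with longjmp, so control leaves SPI_exec without ever reaching a return statement. PostgreSQL's PG_TRY/PG_CATCH machinery is built on setjmp/longjmp. The standalone sketch below (plain C, no PostgreSQL headers; all `demo_*` names are invented for illustration) mimics that control flow to show why a caller only observes such an error if it installs a recovery point first — which is what wrapping the SPI call in PG_TRY does in a background worker.

```c
#include <setjmp.h>
#include <string.h>

static jmp_buf demo_exception_stack;    /* stands in for PG_exception_stack */
static char demo_errmsg[64];

/* stands in for elog(ERROR, ...): reports the error and never returns */
static void demo_elog_error(const char *msg)
{
    strncpy(demo_errmsg, msg, sizeof(demo_errmsg) - 1);
    longjmp(demo_exception_stack, 1);
}

/* stands in for SPI_exec() on a missing table: there is no error-return
 * path, because demo_elog_error() longjmps out of the call entirely */
static int demo_exec(int table_exists)
{
    if (!table_exists)
        demo_elog_error("relation does not exist");
    return 1;                           /* only reached on success */
}

/* caller wrapping the call PG_TRY-style; returns -1 if an error was caught */
static int demo_caller(int table_exists)
{
    if (setjmp(demo_exception_stack) == 0)  /* PG_TRY */
        return demo_exec(table_exists);
    return -1;                              /* PG_CATCH */
}
```

With `table_exists = 0`, `demo_exec` never returns to its caller; the -1 comes from the recovery point installed beforehand. This mirrors why a PG_CATCH block in the caller, not SPI itself, has to turn an elog(ERROR) into a recoverable status code.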
[ { "msg_contents": "Hi,\n\nThe following is written in the comments of PQputCopyEnd().\n\n (snip)\n * Returns 1 if successful, 0 if data could not be sent (only possible\n * in nonblock mode), or -1 if an error occurs.\n (snip)\n\nThe PQputCopyEnd() section of the manual (libpq.sgml) describes the following.\n\n The result is 1 if the termination message was sent; or in\n nonblocking mode, this may only indicate that the termination\n message was successfully queued. (In nonblocking mode, to be\n certain that the data has been sent, you should next wait for\n write-ready and call <xref linkend=\"libpq-PQflush\"/>, repeating until it\n returns zero.) Zero indicates that the function could not queue\n the termination message because of full buffers; this will only\n happen in nonblocking mode. (In this case, wait for\n write-ready and try the <xref linkend=\"libpq-PQputCopyEnd\"/> call\n again.) If a hard error occurs, -1 is returned; you can use\n <xref linkend=\"libpq-PQerrorMessage\"/> to retrieve details.\n\n\nThese say that 0 may be returned if a non-blocking mode is used, but\nthere doesn't seem to be any case where 0 is returned in the code of\nPQputCopyEnd().\n\nI may have missed something, but is it a mistake in the comments or\ndocumentation?\n\nOr should it return 0 when sending a COPY exit message fails\nin non-blocking mode, like this?\n\n@@ -2370,7 +2370,7 @@ PQputCopyEnd(PGconn *conn, const char *errormsg)\n /* Send COPY DONE */\n if (pqPutMsgStart('c', false, conn) < 0 ||\n pqPutMsgEnd(conn) < 0)\n- return -1;\n+ return pqIsnonblocking(conn) ? 0 : -1;\n }\n\n /*\n@@ -2399,7 +2399,7 @@ PQputCopyEnd(PGconn *conn, const char *errormsg)\n if (pqPutMsgStart(0, false, conn) < 0 ||\n pqPutnchar(\"\\\\.\\n\", 3, conn) < 0 ||\n pqPutMsgEnd(conn) < 0)\n- return -1;\n+ return pqIsnonblocking(conn) ? 
0 : -1;\n }\n }\n\n\nBest regards,\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com\n\n\n", "msg_date": "Fri, 5 Feb 2021 16:52:53 +0900", "msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>", "msg_from_op": true, "msg_subject": "There doesn't seem to be any case where PQputCopyEnd() returns 0" }, { "msg_contents": "\n\nOn 2021/02/05 16:52, Kasahara Tatsuhito wrote:\n> Hi,\n> \n> The following is written in the comments of PQputCopyEnd().\n> \n> (snip)\n> * Returns 1 if successful, 0 if data could not be sent (only possible\n> * in nonblock mode), or -1 if an error occurs.\n> (snip)\n> \n> The PQputCopyEnd() section of the manual (libpq.sgml) describes the following.\n> \n> The result is 1 if the termination message was sent; or in\n> nonblocking mode, this may only indicate that the termination\n> message was successfully queued. (In nonblocking mode, to be\n> certain that the data has been sent, you should next wait for\n> write-ready and call <xref linkend=\"libpq-PQflush\"/>, repeating until it\n> returns zero.) Zero indicates that the function could not queue\n> the termination message because of full buffers; this will only\n> happen in nonblocking mode. (In this case, wait for\n> write-ready and try the <xref linkend=\"libpq-PQputCopyEnd\"/> call\n> again.) 
If a hard error occurs, -1 is returned; you can use\n> <xref linkend=\"libpq-PQerrorMessage\"/> to retrieve details.\n> \n> \n> These says that 0 may be returned if a non-blocking mode is used, but\n> there doesn't seem to be any case where 0 is returned in the code of\n> PQputCopyEnd().\n\nI found the past discussion [1] about this issue.\n\n[1]\nhttps://www.postgresql.org/message-id/CA+Tgmobjj+0modbnmjy7ezeBFOBo9d2mAVcSPkzLx4LtZmc==g@mail.gmail.com\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 5 Feb 2021 23:01:03 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: There doesn't seem to be any case where PQputCopyEnd() returns 0" }, { "msg_contents": "On Fri, Feb 5, 2021 at 11:01 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/02/05 16:52, Kasahara Tatsuhito wrote:\n> > Hi,\n> >\n> > The following is written in the comments of PQputCopyEnd().\n> >\n> > (snip)\n> > * Returns 1 if successful, 0 if data could not be sent (only possible\n> > * in nonblock mode), or -1 if an error occurs.\n> > (snip)\n> >\n> > The PQputCopyEnd() section of the manual (libpq.sgml) describes the following.\n> >\n> > The result is 1 if the termination message was sent; or in\n> > nonblocking mode, this may only indicate that the termination\n> > message was successfully queued. (In nonblocking mode, to be\n> > certain that the data has been sent, you should next wait for\n> > write-ready and call <xref linkend=\"libpq-PQflush\"/>, repeating until it\n> > returns zero.) Zero indicates that the function could not queue\n> > the termination message because of full buffers; this will only\n> > happen in nonblocking mode. (In this case, wait for\n> > write-ready and try the <xref linkend=\"libpq-PQputCopyEnd\"/> call\n> > again.) 
If a hard error occurs, -1 is returned; you can use\n> > <xref linkend=\"libpq-PQerrorMessage\"/> to retrieve details.\n> >\n> >\n> > These says that 0 may be returned if a non-blocking mode is used, but\n> > there doesn't seem to be any case where 0 is returned in the code of\n> > PQputCopyEnd().\n>\n> I found the past discussion [1] about this issue.\n>\n> [1]\n> https://www.postgresql.org/message-id/CA+Tgmobjj+0modbnmjy7ezeBFOBo9d2mAVcSPkzLx4LtZmc==g@mail.gmail.com\nOh, thank you.\nI understood what was unclear.\n\nBest regards,\n\n>\n> Regards,\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n\n\n\n-- \nTatsuhito Kasahara\nkasahara.tatsuhito _at_ gmail.com\n\n\n", "msg_date": "Sun, 7 Feb 2021 11:02:21 +0900", "msg_from": "Kasahara Tatsuhito <kasahara.tatsuhito@gmail.com>", "msg_from_op": true, "msg_subject": "Re: There doesn't seem to be any case where PQputCopyEnd() returns 0" } ]
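The calling convention debated in the thread above — 1 = termination message queued, 0 = buffers full so retry (nonblocking mode only), -1 = hard error — implies a caller-side retry loop, even though, as the thread concludes, the current libpq code never actually produces the 0. A standalone sketch of that documented loop, with PQputCopyEnd replaced by an invented stub (`fake_put_copy_end`) that simulates a transiently full buffer:

```c
/* Invented stand-in for PQputCopyEnd's documented contract: 1 = queued,
 * 0 = could not queue (full buffers, nonblocking mode), -1 = hard error.
 * attempts_until_ready simulates how long the buffer stays full. */
static int fake_put_copy_end(int *attempts_until_ready)
{
    if (*attempts_until_ready > 0)
    {
        (*attempts_until_ready)--;
        return 0;               /* caller should wait for write-ready and retry */
    }
    return 1;                   /* termination message queued */
}

/* The retry loop the libpq documentation prescribes for nonblocking mode;
 * returns the final status and counts how many calls it took. */
static int end_copy(int *attempts_until_ready, int *calls)
{
    int r;

    do
    {
        r = fake_put_copy_end(attempts_until_ready);
        (*calls)++;
        /* real code would wait for the socket to become write-ready here,
         * and afterwards loop on PQflush() until it returns zero */
    } while (r == 0);
    return r;
}
```

Because the current implementation returns only 1 or -1, the retry branch of such a loop is effectively dead code — exactly the mismatch between code and documentation that this thread (and the earlier discussion it links to) is about.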
[ { "msg_contents": "Hi,\n\nI've been mucking around with COPY FROM lately, and to test it, I wrote \nsome tools to generate input files and load them with COPY FROM:\n\nhttps://github.com/hlinnaka/pgcopyfuzz\n\nI used a fuzz testing tool called honggfuzz [1] to generate test inputs \nfor COPY FROM. At first I tried to use afl and libfuzzer, but honggfuzz \nwas much easier to use with PostgreSQL. It has a \"persistent fuzzing \nmode\", which allows starting the server normally (well, in single-user \nmode), and calling a function to get the next input. With the other \nfuzzers I tried, you have to provide a callback function that the fuzzer \ncalls for each test iteration, and that was hard to integrate into the \nPostgreSQL main processing loop.\n\nI ran it for about 2 h on my laptop with the patch I was working on [2]. \nIt didn't find any crashes, but it generated about 1300 input files that \nit considered \"interesting\" based on code coverage analysis. When I took \nthose generated inputs, and ran them against unpatched and patched \nserver, some inputs produced different results. So that revealed a \ncouple of bugs in the patch. (I'll post a fixed patched version on that \nthread soon.)\n\nI hope others find this useful, too.\n\n[1] https://github.com/google/honggfuzz\n[2] \nhttps://www.postgresql.org/message-id/11d39e63-b80a-5f8d-8043-fff04201fadc@iki.fi\n\n- Heikki\n\n\n", "msg_date": "Fri, 5 Feb 2021 12:45:30 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Fuzz testing COPY FROM parsing" }, { "msg_contents": "Greetings,\n\n* Heikki Linnakangas (hlinnaka@iki.fi) wrote:\n> I've been mucking around with COPY FROM lately, and to test it, I wrote some\n> tools to generate input files and load them with COPY FROM:\n> \n> https://github.com/hlinnaka/pgcopyfuzz\n\nNeat!\n\n> I used a fuzz testing tool called honggfuzz [1] to generate test inputs for\n> COPY FROM. 
At first I tried to use afl and libfuzzer, but honggfuzz was much\n> easier to use with PostgreSQL. It has a \"persistent fuzzing mode\", which\n> allows starting the server normally (well, in single-user mode), and calling\n> a function to get the next input. With the other fuzzers I tried, you have\n> to provide a callback function that the fuzzer calls for each test\n> iteration, and that was hard to integrate into the PostgreSQL main\n> processing loop.\n\nYeah, that's been one of the challenges with fuzzers I've played with\ntoo.\n\n> I ran it for about 2 h on my laptop with the patch I was working on [2]. It\n> didn't find any crashes, but it generated about 1300 input files that it\n> considered \"interesting\" based on code coverage analysis. When I took those\n> generated inputs, and ran them against unpatched and patched server, some\n> inputs produced different results. So that revealed a couple of bugs in the\n> patch. (I'll post a fixed patched version on that thread soon.)\n> \n> I hope others find this useful, too.\n\nNice! I wonder if there's a way to have a buildfarm member or other\nsystem doing this automatically on new commits and perhaps adding\ncoverage for other things like the JSON code..\n\nThanks!\n\nStephen", "msg_date": "Fri, 5 Feb 2021 10:54:25 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Fuzz testing COPY FROM parsing" }, { "msg_contents": "\nOn 2/5/21 10:54 AM, Stephen Frost wrote:\n> Greetings,\n>\n> * Heikki Linnakangas (hlinnaka@iki.fi) wrote:\n>> I've been mucking around with COPY FROM lately, and to test it, I wrote some\n>> tools to generate input files and load them with COPY FROM:\n>>\n>> https://github.com/hlinnaka/pgcopyfuzz\n> Neat!\n>\n>> I used a fuzz testing tool called honggfuzz [1] to generate test inputs for\n>> COPY FROM. At first I tried to use afl and libfuzzer, but honggfuzz was much\n>> easier to use with PostgreSQL. 
It has a \"persistent fuzzing mode\", which\n>> allows starting the server normally (well, in single-user mode), and calling\n>> a function to get the next input. With the other fuzzers I tried, you have\n>> to provide a callback function that the fuzzer calls for each test\n>> iteration, and that was hard to integrate into the PostgreSQL main\n>> processing loop.\n> Yeah, that's been one of the challenges with fuzzers I've played with\n> too.\n>\n>> I ran it for about 2 h on my laptop with the patch I was working on [2]. It\n>> didn't find any crashes, but it generated about 1300 input files that it\n>> considered \"interesting\" based on code coverage analysis. When I took those\n>> generated inputs, and ran them against unpatched and patched server, some\n>> inputs produced different results. So that revealed a couple of bugs in the\n>> patch. (I'll post a fixed patched version on that thread soon.)\n>>\n>> I hope others find this useful, too.\n> Nice! I wonder if there's a way to have a buildfarm member or other\n> system doing this automatically on new commits and perhaps adding\n> coverage for other things like the JSON code..\n\n\nNot easily in the buildfarm as it is today. We can easily create modules\nfor extensions and other things that don't require modification of core\ncode, but things that require patching core code are a whole different\nstory.\n\nThat's not to say it couldn't be done, a SMOP. 
But using something like\nAppveyor or Cirrus might be a lot simpler.\n\n\ncheers\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 5 Feb 2021 14:16:44 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Fuzz testing COPY FROM parsing" }, { "msg_contents": "On 05/02/2021 21:16, Andrew Dunstan wrote:\n> \n> On 2/5/21 10:54 AM, Stephen Frost wrote:\n>> * Heikki Linnakangas (hlinnaka@iki.fi) wrote:\n>>> I ran it for about 2 h on my laptop with the patch I was working on [2]. It\n>>> didn't find any crashes, but it generated about 1300 input files that it\n>>> considered \"interesting\" based on code coverage analysis. When I took those\n>>> generated inputs, and ran them against unpatched and patched server, some\n>>> inputs produced different results. So that revealed a couple of bugs in the\n>>> patch. (I'll post a fixed patched version on that thread soon.)\n>>>\n>>> I hope others find this useful, too.\n>> Nice! I wonder if there's a way to have a buildfarm member or other\n>> system doing this automatically on new commits and perhaps adding\n>> coverage for other things like the JSON code..\n> \n> Not easily in the buildfarm as it is today. We can easily create modules\n> for extensions and other things that don't require modification of core\n> code, but things that require patching core code are a whole different\n> story.\n\nIt might be possible to call the fuzzer's HF_ITER() function from a C \nextension instead. So you would run a query like \"SELECT \nnext_fuzz_iter()\" in a loop, and next_fuzz_iter() would be a C function \nthat calls HF_ITER(), and executes the actual query with SPI.\n\nThat said, I don't think it's important to run the fuzzer in the \nbuildfarm. It should be enough to do that every once in a while, when \nyou modify the COPY FROM code (or something else that you want to fuzz \ntest). 
But we could easily include the test inputs generated by the \nfuzzer in the regular tests. We've usually been very frugal in adding \ntests, though, to keep the time it takes to run all the tests short.\n\n- Heikki\n\n\n", "msg_date": "Fri, 5 Feb 2021 21:50:40 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Fuzz testing COPY FROM parsing" }, { "msg_contents": "Greetings,\n\n* Heikki Linnakangas (hlinnaka@iki.fi) wrote:\n> On 05/02/2021 21:16, Andrew Dunstan wrote:\n> >On 2/5/21 10:54 AM, Stephen Frost wrote:\n> >>* Heikki Linnakangas (hlinnaka@iki.fi) wrote:\n> >>>I ran it for about 2 h on my laptop with the patch I was working on [2]. It\n> >>>didn't find any crashes, but it generated about 1300 input files that it\n> >>>considered \"interesting\" based on code coverage analysis. When I took those\n> >>>generated inputs, and ran them against unpatched and patched server, some\n> >>>inputs produced different results. So that revealed a couple of bugs in the\n> >>>patch. (I'll post a fixed patched version on that thread soon.)\n> >>>\n> >>>I hope others find this useful, too.\n> >>Nice! I wonder if there's a way to have a buildfarm member or other\n> >>system doing this automatically on new commits and perhaps adding\n> >>coverage for other things like the JSON code..\n> >\n> >Not easily in the buildfarm as it is today. We can easily create modules\n> >for extensions and other things that don't require modification of core\n> >code, but things that require patching core code are a whole different\n> >story.\n> \n> It might be possible to call the fuzzer's HF_ITER() function from a C\n> extension instead. 
So you would run a query like \"SELECT next_fuzz_iter()\"\n> in a loop, and next_fuzz_iter() would be a C function that calls HF_ITER(),\n> and executes the actual query with SPI.\n\nI wonder how much we could fuzz with that approach...\n\n> That said, I don't think it's important to run the fuzzer in the buildfarm.\n> It should be enough to do that every once in a while, when you modify the\n> COPY FROM code (or something else that you want to fuzz test). But we could\n> easily include the test inputs generated by the fuzzer in the regular tests.\n> We've usually been very frugal in adding tests, though, to keep the time it\n> takes to run all the tests short.\n\nIf we could be sure that everyone who might ever modify the COPY FROM or\nJSON parser or other code that we arrange to get fuzz testing on with\nthis approach, that would be great, but I wouldn't make a bet on that\nhappening, which is why having it done (however it's done) in an\nautomated fashion would be good. Also, doing it on the buildfarm, or\nusing a CI tool, means we can allow it to run longer since it won't be\ndirectly impacting developers. I'd love to see us do more of that in\ngeneral. It's great that we have good regression tests that can be run\nfast and catch some things, but it seems likely that there'll always be\nthings that just take longer to test and having that done in an\nautomated fashion essentially 'in the background' would be great, so we\ncan get reports back and fix anything they find before release.\n\nThanks,\n\nStephen", "msg_date": "Fri, 5 Feb 2021 15:06:46 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Fuzz testing COPY FROM parsing" }, { "msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> That said, I don't think it's important to run the fuzzer in the \n> buildfarm. 
It should be enough to do that every once in a while, when \n> you modify the COPY FROM code (or something else that you want to fuzz \n> test). But we could easily include the test inputs generated by the \n> fuzzer in the regular tests. We've usually been very frugal in adding \n> tests, though, to keep the time it takes to run all the tests short.\n\nYeah, I think there's a lot of value in the fact that it doesn't\ntake too long to run the core regression tests, or even check-world.\n\nAlso, given you mentioned that this fuzzer bases its work partly\non code examination, it seems like the right procedure would be to\nre-invoke the fuzzer after changes, not just blindly re-use the\ntest cases it made for the old code. So it seems like the thing\nwe want here is documentation or a test harness for using the\nfuzzer, but not direct incorporation of test cases.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Feb 2021 15:11:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fuzz testing COPY FROM parsing" }, { "msg_contents": "On Fri, Feb 05, 2021 at 12:45:30PM +0200, Heikki Linnakangas wrote:\n> Hi,\n> \n> I've been mucking around with COPY FROM lately, and to test it, I wrote some\n> tools to generate input files and load them with COPY FROM:\n> \n> https://github.com/hlinnaka/pgcopyfuzz\n\nNeat!\n\nThe way it's already produced results is impressive.\n\nLooking at honggfuzz, I see it's been used for wire protocols, of\nwhich we have several. 
Does testing our wire protocols seem like a\nbig lift?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Sat, 6 Feb 2021 02:40:52 +0100", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: Fuzz testing COPY FROM parsing" }, { "msg_contents": "\nOn 2/5/21 2:50 PM, Heikki Linnakangas wrote:\n> On 05/02/2021 21:16, Andrew Dunstan wrote:\n>>\n>> On 2/5/21 10:54 AM, Stephen Frost wrote:\n>>> * Heikki Linnakangas (hlinnaka@iki.fi) wrote:\n>>>> I ran it for about 2 h on my laptop with the patch I was working on\n>>>> [2]. It\n>>>> didn't find any crashes, but it generated about 1300 input files\n>>>> that it\n>>>> considered \"interesting\" based on code coverage analysis. When I\n>>>> took those\n>>>> generated inputs, and ran them against unpatched and patched\n>>>> server, some\n>>>> inputs produced different results. So that revealed a couple of\n>>>> bugs in the\n>>>> patch. (I'll post a fixed patched version on that thread soon.)\n>>>>\n>>>> I hope others find this useful, too.\n>>> Nice!� I wonder if there's a way to have a buildfarm member or other\n>>> system doing this automatically on new commits and perhaps adding\n>>> coverage for other things like the JSON code..\n>>\n>> Not easily in the buildfarm as it is today. We can easily create modules\n>> for extensions and other things that don't require modification of core\n>> code, but things that require patching core code are a whole different\n>> story.\n>\n> It might be possible to call the fuzzer's HF_ITER() function from a C\n> extension instead. 
So you would run a query like \"SELECT\n> next_fuzz_iter()\" in a loop, and next_fuzz_iter() would be a C\n> function that calls HF_ITER(), and executes the actual query with SPI.\n>\n> That said, I don't think it's important to run the fuzzer in the\n> buildfarm. It should be enough to do that every once in a while, when\n> you modify the COPY FROM code (or something else that you want to fuzz\n> test). But we could easily include the test inputs generated by the\n> fuzzer in the regular tests. We've usually been very frugal in adding\n> tests, though, to keep the time it takes to run all the tests short.\n>\n>\n\nThis strikes me as a better design in any case.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 6 Feb 2021 08:28:48 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Fuzz testing COPY FROM parsing" } ]
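For readers unfamiliar with the "persistent fuzzing mode" discussed in the thread above: in honggfuzz, the harness links against libhfuzz and calls HF_ITER() in a loop to receive each next input, which is what made it easy to drop into PostgreSQL's single-user processing loop. The sketch below shows the shape of such a driver loop; since the real HF_ITER only exists when linking against honggfuzz, it is replaced here with an invented local stub (`stub_hf_iter`) that replays a tiny fixed corpus, so the sketch compiles on its own.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Invented stand-in for honggfuzz's HF_ITER(): hands out the next input
 * buffer.  Here it replays a fixed corpus and reports exhaustion; a real
 * persistent-mode build gets HF_ITER from libhfuzz and loops forever. */
static int stub_hf_iter(const uint8_t **buf, size_t *len)
{
    static const char *corpus[] = { "a\tb\n", "1,2\n", "\\.\n" };
    static size_t next = 0;

    if (next >= sizeof(corpus) / sizeof(corpus[0]))
        return 0;                       /* corpus exhausted */
    *buf = (const uint8_t *) corpus[next];
    *len = strlen(corpus[next]);
    next++;
    return 1;
}

/* Driver loop: fetch the next input and hand it to the code under test
 * (in the setup described above, the COPY FROM input parser). */
static size_t run_harness(size_t *bytes_seen)
{
    const uint8_t *buf;
    size_t len;
    size_t iters = 0;

    while (stub_hf_iter(&buf, &len))
    {
        *bytes_seen += len;             /* placeholder for feeding COPY FROM */
        iters++;
    }
    return iters;
}
```

In the real setup each iteration would execute a COPY FROM over the supplied buffer; the fuzzer's code-coverage feedback then decides which generated inputs count as "interesting", as described in the first message of the thread.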
[ { "msg_contents": "Hi,\n\npg_terminate_backend and pg_cancel_backend with postmaster PID produce\n\"PID XXXX is not a PostgreSQL server process\" warning [1], which\nbasically implies that the postmaster is not a PostgreSQL process at\nall. This is a bit misleading because the postmaster is the parent of\nall PostgreSQL processes. Should we improve the warning message if the\ngiven PID is the postmaster's PID?\n\nIf yes, how about a generic message for both of the functions -\n\"signalling postmaster process is not allowed\" or \"cannot signal\npostmaster process\" or some other better suggestion?\n\n[1] 2471176 ---> is postmaster PID.\npostgres=# select pg_terminate_backend(2471176);\nWARNING: PID 2471176 is not a PostgreSQL server process\n pg_terminate_backend\n----------------------\n f\n(1 row)\npostgres=# select pg_cancel_backend(2471176);\nWARNING: PID 2471176 is not a PostgreSQL server process\n pg_cancel_backend\n-------------------\n f\n(1 row)\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Feb 2021 17:15:15 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Should we improve \"PID XXXX is not a PostgreSQL server process\"\n warning for pg_terminate_backend(<<postmaster_pid>>)?" }, { "msg_contents": "On Fri, Feb 5, 2021 at 5:15 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> pg_terminate_backend and pg_cancel_backend with postmaster PID produce\n> \"PID XXXX is not a PostgreSQL server process\" warning [1], which\n> basically implies that the postmaster is not a PostgreSQL process at\n> all. This is a bit misleading because the postmaster is the parent of\n> all PostgreSQL processes. 
Should we improve the warning message if the\n> given PID is postmasters' PID?\n>\n> If yes, how about a generic message for both of the functions -\n> \"signalling postmaster process is not allowed\" or \"cannot signal\n> postmaster process\" or some other better suggestion?\n>\n> [1] 2471176 ---> is postmaster PID.\n> postgres=# select pg_terminate_backend(2471176);\n> WARNING: PID 2471176 is not a PostgreSQL server process\n> pg_terminate_backend\n> ----------------------\n> f\n> (1 row)\n> postgres=# select pg_cancel_backend(2471176);\n> WARNING: PID 2471176 is not a PostgreSQL server process\n> pg_cancel_backend\n> -------------------\n> f\n> (1 row)\n\nI'm attaching a small patch that emits a warning \"signalling\npostmaster with PID %d is not allowed\" for postmaster and \"signalling\nPostgreSQL server process with PID %d is not allowed\" for auxiliary\nprocesses such as checkpointer, background writer, walwriter.\n\nHowever, for stats collector and sys logger processes, we still get\n\"PID XXXXX is not a PostgreSQL server process\" warning because they\ndon't have PGPROC entries(??). So BackendPidGetProc and\nAuxiliaryPidGetProc will not help and even pg_stat_activity is not\nhaving these processes' pid.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sun, 7 Mar 2021 15:46:39 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Should we improve \"PID XXXX is not a PostgreSQL server process\"\n warning for pg_terminate_backend(<<postmaster_pid>>)?" 
}, { "msg_contents": "On 2021-03-07 19:16, Bharath Rupireddy wrote:\n> On Fri, Feb 5, 2021 at 5:15 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> \n>> pg_terminate_backend and pg_cancel_backend with postmaster PID produce\n>> \"PID XXXX is not a PostgresSQL server process\" warning [1], which\n>> basically implies that the postmaster is not a PostgreSQL process at\n>> all. This is a bit misleading because the postmaster is the parent of\n>> all PostgreSQL processes. Should we improve the warning message if the\n>> given PID is postmasters' PID?\n\n+1. I felt it was a bit confusing when reviewing a thread[1].\n\n>> \n>> If yes, how about a generic message for both of the functions -\n>> \"signalling postmaster process is not allowed\" or \"cannot signal\n>> postmaster process\" or some other better suggestion?\n>> \n>> [1] 2471176 ---> is postmaster PID.\n>> postgres=# select pg_terminate_backend(2471176);\n>> WARNING: PID 2471176 is not a PostgreSQL server process\n>> pg_terminate_backend\n>> ----------------------\n>> f\n>> (1 row)\n>> postgres=# select pg_cancel_backend(2471176);\n>> WARNING: PID 2471176 is not a PostgreSQL server process\n>> pg_cancel_backend\n>> -------------------\n>> f\n>> (1 row)\n> \n> I'm attaching a small patch that emits a warning \"signalling\n> postmaster with PID %d is not allowed\" for postmaster and \"signalling\n> PostgreSQL server process with PID %d is not allowed\" for auxiliary\n> processes such as checkpointer, background writer, walwriter.\n> \n> However, for stats collector and sys logger processes, we still get\n> \"PID XXXXX is not a PostgreSQL server process\" warning because they\n> don't have PGPROC entries(??). 
So BackendPidGetProc and\n> AuxiliaryPidGetProc will not help and even pg_stat_activity is not\n> having these processes' pid.\n\nI also ran into the same problem while creating a patch in [2].\n\nI'm now wondering if changing the message to something like\n\"PID XXXX is not a PostgreSQL backend process\".\n\n\"backend process' is now defined as \"Process of an instance\nwhich acts on behalf of a client session and handles its\nrequests.\" in Appendix.\n\n\n[1] \nhttps://www.postgresql.org/message-id/CALDaNm3ZzmFS-%3Dr7oDUzj7y7BgQv%2BN06Kqyft6C3xZDoKnk_6w%40mail.gmail.com\n\n[2] \nhttps://www.postgresql.org/message-id/0271f440ac77f2a4180e0e56ebd944d1%40oss.nttdata.com\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 15 Mar 2021 14:53:31 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Should we improve \"PID XXXX is not a PostgreSQL server process\"\n warning for pg_terminate_backend(<<postmaster_pid>>)?" }, { "msg_contents": "On Mon, Mar 15, 2021 at 11:23 AM torikoshia <torikoshia@oss.nttdata.com> wrote:\n>\n> On 2021-03-07 19:16, Bharath Rupireddy wrote:\n> > On Fri, Feb 5, 2021 at 5:15 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >>\n> >> pg_terminate_backend and pg_cancel_backend with postmaster PID produce\n> >> \"PID XXXX is not a PostgresSQL server process\" warning [1], which\n> >> basically implies that the postmaster is not a PostgreSQL process at\n> >> all. This is a bit misleading because the postmaster is the parent of\n> >> all PostgreSQL processes. Should we improve the warning message if the\n> >> given PID is postmasters' PID?\n>\n> +1. 
I felt it was a bit confusing when reviewing a thread[1].\n\nHmmm.\n\n> > I'm attaching a small patch that emits a warning \"signalling\n> > postmaster with PID %d is not allowed\" for postmaster and \"signalling\n> > PostgreSQL server process with PID %d is not allowed\" for auxiliary\n> > processes such as checkpointer, background writer, walwriter.\n> >\n> > However, for stats collector and sys logger processes, we still get\n> > \"PID XXXXX is not a PostgreSQL server process\" warning because they\n> > don't have PGPROC entries(??). So BackendPidGetProc and\n> > AuxiliaryPidGetProc will not help and even pg_stat_activity is not\n> > having these processes' pid.\n>\n> I also ran into the same problem while creating a patch in [2].\n\nI have not gone through that thread though. Is there any way we can\ndetect those child processes(stats collector, sys logger) that are\nforked by the postmaster from a backend process? Thoughts?\n\n> I'm now wondering if changing the message to something like\n> \"PID XXXX is not a PostgreSQL backend process\".\n>\n> \"backend process' is now defined as \"Process of an instance\n> which acts on behalf of a client session and handles its\n> requests.\" in Appendix.\n\nYeah, that looks good to me. IIUC, we can just change the message from\n\"PID XXXX is not a PostgreSQL server process\" to \"PID XXXX is not a\nPostgreSQL backend process\" and we don't need look for AuxiliaryProcs\nor PostmasterPid.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 16 Mar 2021 17:21:19 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Should we improve \"PID XXXX is not a PostgreSQL server process\"\n warning for pg_terminate_backend(<<postmaster_pid>>)?" 
}, { "msg_contents": "On 2021-03-16 20:51, Bharath Rupireddy wrote:\n> On Mon, Mar 15, 2021 at 11:23 AM torikoshia \n> <torikoshia@oss.nttdata.com> wrote:\n>> \n>> On 2021-03-07 19:16, Bharath Rupireddy wrote:\n>> > On Fri, Feb 5, 2021 at 5:15 PM Bharath Rupireddy\n>> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> >>\n>> >> pg_terminate_backend and pg_cancel_backend with postmaster PID produce\n>> >> \"PID XXXX is not a PostgresSQL server process\" warning [1], which\n>> >> basically implies that the postmaster is not a PostgreSQL process at\n>> >> all. This is a bit misleading because the postmaster is the parent of\n>> >> all PostgreSQL processes. Should we improve the warning message if the\n>> >> given PID is postmasters' PID?\n>> \n>> +1. I felt it was a bit confusing when reviewing a thread[1].\n> \n> Hmmm.\n> \n>> > I'm attaching a small patch that emits a warning \"signalling\n>> > postmaster with PID %d is not allowed\" for postmaster and \"signalling\n>> > PostgreSQL server process with PID %d is not allowed\" for auxiliary\n>> > processes such as checkpointer, background writer, walwriter.\n>> >\n>> > However, for stats collector and sys logger processes, we still get\n>> > \"PID XXXXX is not a PostgreSQL server process\" warning because they\n>> > don't have PGPROC entries(??). So BackendPidGetProc and\n>> > AuxiliaryPidGetProc will not help and even pg_stat_activity is not\n>> > having these processes' pid.\n>> \n>> I also ran into the same problem while creating a patch in [2].\n> \n> I have not gone through that thread though. Is there any way we can\n> detect those child processes(stats collector, sys logger) that are\n> forked by the postmaster from a backend process? 
Thoughts?\n\nI couldn't find good ways to do that, and thus I'm now wondering\njust changing the message.\n\n>> I'm now wondering if changing the message to something like\n>> \"PID XXXX is not a PostgreSQL backend process\".\n>> \n>> \"backend process' is now defined as \"Process of an instance\n>> which acts on behalf of a client session and handles its\n>> requests.\" in Appendix.\n> \n> Yeah, that looks good to me. IIUC, we can just change the message from\n> \"PID XXXX is not a PostgreSQL server process\" to \"PID XXXX is not a\n> PostgreSQL backend process\" and we don't need look for AuxiliaryProcs\n> or PostmasterPid.\n\n\nChanging log messages can affect operations, especially when people\nmonitor the log message strings, but improving \"PID XXXX is not a\nPostgreSQL server process\" does not seem to cause such problems.\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 17 Mar 2021 11:35:45 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Should we improve \"PID XXXX is not a PostgreSQL server process\"\n warning for pg_terminate_backend(<<postmaster_pid>>)?" }, { "msg_contents": "On Wed, Mar 17, 2021 at 8:05 AM torikoshia <torikoshia@oss.nttdata.com> wrote:\n> > I have not gone through that thread though. Is there any way we can\n> > detect those child processes(stats collector, sys logger) that are\n> > forked by the postmaster from a backend process? Thoughts?\n>\n> I couldn't find good ways to do that, and thus I'm now wondering\n> just changing the message.\n\nChanged the message.\n\n> >> I'm now wondering if changing the message to something like\n> >> \"PID XXXX is not a PostgreSQL backend process\".\n> >>\n> >> \"backend process' is now defined as \"Process of an instance\n> >> which acts on behalf of a client session and handles its\n> >> requests.\" in Appendix.\n> >\n> > Yeah, that looks good to me. 
IIUC, we can just change the message from\n> > \"PID XXXX is not a PostgreSQL server process\" to \"PID XXXX is not a\n> > PostgreSQL backend process\" and we don't need look for AuxiliaryProcs\n> > or PostmasterPid.\n>\n> Changing log messages can affect operations, especially when people\n> monitor the log message strings, but improving \"PID XXXX is not a\n> PostgreSQL server process\" does not seem to cause such problems.\n\nNow the error message clearly says that the given pid is not a backend\nprocess. Attaching v2 patch. Please have a look.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 17 Mar 2021 14:44:21 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Should we improve \"PID XXXX is not a PostgreSQL server process\"\n warning for pg_terminate_backend(<<postmaster_pid>>)?" }, { "msg_contents": "On Sun, Mar 7, 2021 at 3:46 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Feb 5, 2021 at 5:15 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > pg_terminate_backend and pg_cancel_backend with postmaster PID produce\n> > \"PID XXXX is not a PostgreSQL server process\" warning [1], which\n> > basically implies that the postmaster is not a PostgreSQL process at\n> > all. This is a bit misleading because the postmaster is the parent of\n> > all PostgreSQL processes. 
Should we improve the warning message if the\n> > given PID is postmasters' PID?\n> >\n> > If yes, how about a generic message for both of the functions -\n> > \"signalling postmaster process is not allowed\" or \"cannot signal\n> > postmaster process\" or some other better suggestion?\n> >\n> > [1] 2471176 ---> is postmaster PID.\n> > postgres=# select pg_terminate_backend(2471176);\n> > WARNING: PID 2471176 is not a PostgreSQL server process\n> > pg_terminate_backend\n> > ----------------------\n> > f\n> > (1 row)\n> > postgres=# select pg_cancel_backend(2471176);\n> > WARNING: PID 2471176 is not a PostgreSQL server process\n> > pg_cancel_backend\n> > -------------------\n> > f\n> > (1 row)\n>\n> I'm attaching a small patch that emits a warning \"signalling\n> postmaster with PID %d is not allowed\" for postmaster and \"signalling\n> PostgreSQL server process with PID %d is not allowed\" for auxiliary\n> processes such as checkpointer, background writer, walwriter.\n>\n> However, for stats collector and sys logger processes, we still get\n> \"PID XXXXX is not a PostgreSQL server process\" warning because they\n> don't have PGPROC entries(??). So BackendPidGetProc and\n> AuxiliaryPidGetProc will not help and even pg_stat_activity is not\n> having these processes' pid.\n\nAs there is some interest shown in this thread at [1], I'm attaching a\nnew v3 patch here. Please review it.\n\nCF entry - https://commitfest.postgresql.org/36/3411/\n\n[1] - https://www.postgresql.org/message-id/CAFiTN-sX_66svOPdix1edB_WxGj%3DWu4XouyRQrvySwCK0V8Btg%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.", "msg_date": "Mon, 15 Nov 2021 12:57:52 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Should we improve \"PID XXXX is not a PostgreSQL server process\"\n warning for pg_terminate_backend(<<postmaster_pid>>)?" 
}, { "msg_contents": "On Mon, Nov 15, 2021, at 4:27 AM, Bharath Rupireddy wrote:\n> As there is some interest shown in this thread at [1], I'm attaching a\n> new v3 patch here. Please review it.\nI took a look at this patch. I have a few comments.\n\n+ ereport(WARNING,\n+ (errmsg(\"signalling postmaster with PID %d is not allowed\", pid)));\n\nI would say \"signal postmaster PID 1234 is not allowed\". It is not an\nin-progress action.\n\ns/shared-memory/shared memory/\n\nsyslogger and statistics collector don't have a procArray entry so you could\nprobably provide a new function that checks if it is an auxiliary process.\nAuxiliaryPidGetProc() does not return all auxiliary processes; syslogger and\nstatistics collector don't have a procArray entry. You can use their PIDs\n(SysLoggerPID and PgStatPID) to provide an accurate information.\n\n+ if (proc)\n+ ereport(WARNING,\n+ (errmsg(\"signalling PostgreSQL server process with PID %d is not allowed\",\n\nI would say \"signal PostgreSQL auxiliary process PID 1234 is not allowed\".\n\n+ ereport(WARNING,\n+ (errmsg(\"PID %d is not a PostgreSQL server process\", pid)));\n\nI would say \"PID 1234 is not a PostgreSQL backend process\". That's the glossary\nterminology.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Wed, 17 Nov 2021 15:59:59 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: Should we improve \"PID XXXX is not a PostgreSQL server process\"\n warning\n for pg_terminate_backend(<<postmaster_pid>>)?" },
{ "msg_contents": "On Wed, Nov 17, 2021 at 03:59:59PM -0300, Euler Taveira wrote:\n> On Mon, Nov 15, 2021, at 4:27 AM, Bharath Rupireddy wrote:\n> > As there is some interest shown in this thread at [1], I'm attaching a\n> > new v3 patch here. Please review it.\n> I took a look at this patch. I have a few comments.\n> \n> + ereport(WARNING,\n> + (errmsg(\"signalling postmaster with PID %d is not allowed\", pid)));\n> \n> I would say \"signal postmaster PID 1234 is not allowed\". It is not an\n> in-progress action.\n\nIt's correct to say \"signalling ... 
is not allowed\", which means the same as\n\"it is not allowed to signal ...\".\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 17 Nov 2021 13:13:02 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Should we improve \"PID XXXX is not a PostgreSQL server process\"\n warning for pg_terminate_backend(<<postmaster_pid>>)?" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Wed, Nov 17, 2021 at 03:59:59PM -0300, Euler Taveira wrote:\n>> I took a look at this patch. I have a few comments.\n>> \n>> + ereport(WARNING,\n>> + (errmsg(\"signalling postmaster with PID %d is not allowed\", pid)));\n>> \n>> I would say \"signal postmaster PID 1234 is not allowed\". It is not an\n>> in-progress action.\n\n> It's correct to say \"signalling ... is not allowed\", which means the same as\n> \"it is not allowed to signal ...\".\n\nYeah, the grammar is fine as far as that goes. What reads awkwardly to me\nis inclusion of \"with PID %d\" in the middle of the sentence. That seems\nodd, not least because it leaves the impression that maybe it would've\nbeen okay to signal some other postmaster with a different PID.\n\nFrankly, I think the existing wording is fine and this patch adds\ncomplication without making any useful improvement. We could maybe change\n\"is not a PostgreSQL server process\" to \"is not a PostgreSQL backend\nprocess\", but I wouldn't go further than that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Nov 2021 14:37:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Should we improve \"PID XXXX is not a PostgreSQL server process\"\n warning for pg_terminate_backend(<<postmaster_pid>>)?" }, { "msg_contents": "On Thu, Nov 18, 2021 at 12:30 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Mon, Nov 15, 2021, at 4:27 AM, Bharath Rupireddy wrote:\n>\n> As there is some interest shown in this thread at [1], I'm attaching a\n> new v3 patch here. 
Please review it.\n>\n> I took a look at this patch. I have a few comments.\n\nThanks a lot.\n\n> s/shared-memory/shared memory/\n\nI don't think we need to change that. This comment is picked up from\nanother AuxiliaryPidGetProc call from pgstatsfuncs.c and in the core\nwe have lots of instances of the term \"shared-memory\". I think we can\nhave it as is and let's not attempt to change it here in this thread\nat least.\n\n> syslogger and statistics collector don't have a procArray entry so you could\n> probably provide a new function that checks if it is an auxiliary process.\n> AuxiliaryPidGetProc() does not return all auxiliary processes; syslogger and\n> statistics collector don't have a procArray entry. You can use their PIDs\n> (SysLoggerPID and PgStatPID) to provide an accurate information.\n>\n> + if (proc)\n> + ereport(WARNING,\n> + (errmsg(\"signalling PostgreSQL server process with PID %d is not allowed\",\n>\n> I would say \"signal PostgreSQL auxiliary process PID 1234 is not allowed\".\n\nAlthough we have defined the term auxiliary process in the glossary\nrecently, I haven't found (on a quick look) any user facing log\nmessages using the term \"auxiliary process\". And if we just say \"we\ncan't signal an auxiliary process\", it doesn't look complete (we end\nup hitting the other messages down for syslogger and stats collector).\nNote that the AuxiliaryPidGetProc doesn't return a PGPROC entry for\nsyslogger and stats collector which according to the glossary are\nauxiliary processes.\n\n> + ereport(WARNING,\n> + (errmsg(\"PID %d is not a PostgreSQL server process\", pid)));\n>\n> I would say \"PID 1234 is not a PostgreSQL backend process\". 
That's the glossary\n> terminology.\n\nThis looks okay as it along with the other new messages, says that the\ncalling function is allowed only to signal the backend process not the\npostmaster or the other postgresql process (auxiliary process) or the\nnon-postgres processes.\n\nThe following is what I made up in my mind after looking at other\nexisting messages, like [1] and the review comments:\nerrmsg(\"cannot send signal to postmaster %d\", pid, --> the process\nis postmaster but the caller isn't allowed to signal.\nerrmsg(\"cannot send signal to PostgreSQL server process %d\", pid,\n--> the process is a postgresql process but the caller isn't allowed\nto signal.\nerrmsg(\"PID %d is not a PostgreSQL backend process\", pid, ---> it may\nbe another postgres processes like syslogger or stats collector or\nnon-postgres process but not a backend process.\n\nThoughts?\n\n[1]\n(errmsg(\"could not send signal to process %d: %m\", pid)));\n(errmsg(\"failed to send signal to postmaster: %m\")));\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 18 Nov 2021 17:01:19 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Should we improve \"PID XXXX is not a PostgreSQL server process\"\n warning for pg_terminate_backend(<<postmaster_pid>>)?" 
}, { "msg_contents": "On Thu, Nov 18, 2021 at 5:01 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> The following is what I made up in my mind after looking at other\n> existing messages, like [1] and the review comments:\n> errmsg(\"cannot send signal to postmaster %d\", pid, --> the process\n> is postmaster but the caller isn't allowed to signal.\n> errmsg(\"cannot send signal to PostgreSQL server process %d\", pid,\n> --> the process is a postgresql process but the caller isn't allowed\n> to signal.\n> errmsg(\"PID %d is not a PostgreSQL backend process\", pid, ---> it may\n> be another postgres processes like syslogger or stats collector or\n> non-postgres process but not a backend process.\n>\n> Thoughts?\n>\n> [1]\n> (errmsg(\"could not send signal to process %d: %m\", pid)));\n> (errmsg(\"failed to send signal to postmaster: %m\")));\n\nHere's the v4 patch with the above changes, the output looks like [1].\nPlease review it further.\n\n[1]\npostgres=# select pg_terminate_backend(2407245);\nWARNING: cannot send signal to postmaster 2407245\n pg_terminate_backend\n----------------------\n f\n(1 row)\n\npostgres=# select pg_terminate_backend(2407246);\nWARNING: cannot send signal to PostgreSQL server process 2407246\n pg_terminate_backend\n----------------------\n f\n(1 row)\n\npostgres=# select pg_terminate_backend(2407286);\nWARNING: PID 2407286 is not a PostgreSQL backend process\n pg_terminate_backend\n----------------------\n f\n(1 row)\n\nRegards,\nBharath Rupireddy.", "msg_date": "Fri, 19 Nov 2021 09:54:39 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Should we improve \"PID XXXX is not a PostgreSQL server process\"\n warning for pg_terminate_backend(<<postmaster_pid>>)?" 
}, { "msg_contents": "On 11/18/21, 8:27 PM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> Here's the v4 patch with the above changes, the output looks like [1].\r\n> Please review it further.\r\n\r\nI agree with Tom. I would just s/server/backend/ (as per the\r\nattached) and call it a day.\r\n\r\nNathan", "msg_date": "Tue, 7 Dec 2021 22:47:18 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Should we improve \"PID XXXX is not a PostgreSQL server process\"\n warning\n for pg_terminate_backend(<<postmaster_pid>>)?" }, { "msg_contents": "On Wed, Dec 8, 2021 at 4:17 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 11/18/21, 8:27 PM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Here's the v4 patch with the above changes, the output looks like [1].\n> > Please review it further.\n>\n> I agree with Tom. I would just s/server/backend/ (as per the\n> attached) and call it a day.\n\nThanks. v5 patch looks good to me.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 8 Dec 2021 06:50:46 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Should we improve \"PID XXXX is not a PostgreSQL server process\"\n warning for pg_terminate_backend(<<postmaster_pid>>)?" }, { "msg_contents": "On 12/7/21, 5:21 PM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> On Wed, Dec 8, 2021 at 4:17 AM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> I agree with Tom. I would just s/server/backend/ (as per the\r\n>> attached) and call it a day.\r\n>\r\n> Thanks. 
v5 patch looks good to me.\r\n\r\nI've marked the commitfest entry as ready-for-committer.\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 8 Dec 2021 03:51:48 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Should we improve \"PID XXXX is not a PostgreSQL server process\"\n warning\n for pg_terminate_backend(<<postmaster_pid>>)?" }, { "msg_contents": "On Tue, Dec 7, 2021 at 10:51 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 12/7/21, 5:21 PM, \"Bharath Rupireddy\" <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > On Wed, Dec 8, 2021 at 4:17 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n> >> I agree with Tom. I would just s/server/backend/ (as per the\n> >> attached) and call it a day.\n> >\n> > Thanks. v5 patch looks good to me.\n>\n> I've marked the commitfest entry as ready-for-committer.\n\nI pushed this with one small change -- I felt the comment didn't need\nto explain the warning message, since it now simply matches the coding\nmore exactly. Also, v5 was a big enough change from v4 that I put\nNathan as the first author.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 11 Jan 2022 13:03:24 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Should we improve \"PID XXXX is not a PostgreSQL server process\"\n warning for pg_terminate_backend(<<postmaster_pid>>)?" }, { "msg_contents": "On 1/11/22, 10:06 AM, \"John Naylor\" <john.naylor@enterprisedb.com> wrote:\r\n> I pushed this with one small change -- I felt the comment didn't need\r\n> to explain the warning message, since it now simply matches the coding\r\n> more exactly. 
Also, v5 was a big enough change from v4 that I put\r\n> Nathan as the first author.\r\n\r\nThanks!\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 11 Jan 2022 18:07:52 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Should we improve \"PID XXXX is not a PostgreSQL server process\"\n warning\n for pg_terminate_backend(<<postmaster_pid>>)?" } ]
[ { "msg_contents": "See\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=805093113df3f09979cb0e55e857976aad77b8af\n\nPlease send any corrections by Sunday.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Feb 2021 15:07:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "First-draft release notes for back branches are up" } ]
[ { "msg_contents": "While working on [1] I ended up running into a couple issues with the\r\nKerberos test suite. Attached are two patches with possible\r\nimprovements:\r\n\r\nSome tests check for expected log messages. They currently search\r\nthrough the entire log file, from the beginning, for every match. So if\r\ntwo tests share the same expected log content (which is common), it's\r\npossible for the second test to get a false positive by matching\r\nagainst the first test's output. (You can see this by modifying one of\r\nthe expected-failure tests to expect the same output as a previous\r\nhappy-path test.)\r\n\r\nThe first patch stores the offset of the previous match, and searches\r\nforward from there during the next match, resetting the offset every\r\ntime the log file changes. This isn't perfect -- it could still result\r\nin false positives if one test spits out two or more matching log lines\r\nand only matches the first one by itself -- but searching forward\r\nshould be an improvement over what's there now.\r\n\r\nThe second patch is more of a quality-of-life improvement for devs. On\r\na failed log match, the test will spin for three minutes before giving\r\nup on the match. I think this is excessive, especially since\r\ninterrupting the test with Ctrl-C leaves behind a running KDC daemon.\r\nThe patch reduces the timeout to three seconds. I guess the only\r\nquestion I have is whether there are any underpowered machines out\r\nthere running this test, relying on the higher timeout to function.\r\n\r\n--Jacob\r\n\r\n[1] \r\nhttps://www.postgresql.org/message-id/a9ee5e4e8e844d06c2bcf70c6ed3306ccb4897f1.camel%40vmware.com", "msg_date": "Fri, 5 Feb 2021 20:22:40 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "More test/kerberos tweaks" }, { "msg_contents": "Jacob Champion <pchampion@vmware.com> writes:\n> The second patch is more of a quality-of-life improvement for devs. 
On\n> a failed log match, the test will spin for three minutes before giving\n> up on the match. I think this is excessive, especially since\n> interrupting the test with Ctrl-C leaves behind a running KDC daemon.\n> The patch reduces the timeout to three seconds. I guess the only\n> question I have is whether there are any underpowered machines out\n> there running this test, relying on the higher timeout to function.\n\nWe have, almost invariably, regretted it when we tried to use short\ntimeouts in test cases.\n\nI checked the buildfarm logs for the past month to see which machines\nare running the kerberos test and what their reported stage runtimes\nwere. There are just three:\n\n system min time max time\n\n crake | 00:00:09 | 00:01:16\n eelpout | 00:00:00 | 00:00:01\n elver | 00:00:04 | 00:00:09\n\nI'm not sure what's happening on crake to give it such a wide range\nof runtimes on this test, but I can't help thinking it would probably\nhave failed a few of those runs with a three-second timeout.\n\nMore generally, sometimes people want to do things like run a test\nunder valgrind. So it's not just \"underpowered machines\" that may\nneed a generous timeout. Even if we did reduce the default, I'd\nwant a way (probably via an environment variable, cf PGCTLTIMEOUT)\nto kick it back up.\n\nOn the whole, I think the right thing to be doing here is not so\nmuch messing with the timeout as fixing the test script to be\nmore robust against control-C. If it's failing to shut down the\nKDC, I'd say that's a test bug.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Feb 2021 15:55:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: More test/kerberos tweaks" }, { "msg_contents": "On Fri, 2021-02-05 at 15:55 -0500, Tom Lane wrote:\r\n> We have, almost invariably, regretted it when we tried to use short\r\n> timeouts in test cases.\r\n\r\nThat's what I was afraid of. 
I can work around it easily enough on my\r\nlocal machine, so it's not really a blocker in any sense.\r\n\r\nThat just leaves the first patch, then.\r\n\r\n> On the whole, I think the right thing to be doing here is not so\r\n> much messing with the timeout as fixing the test script to be\r\n> more robust against control-C. If it's failing to shut down the\r\n> KDC, I'd say that's a test bug.\r\n\r\nAgreed. I'm trying to limit the amount of test churn I introduce, since\r\nI don't speak Perl very well. :D\r\n\r\nReworking the log checks so that they didn't need timeouts (e.g. by\r\nstopping the server or otherwise flushing the logs, a la the\r\nssl_passphrase_callback tests) would be another approach. I'll jot it\r\ndown to look into later.\r\n\r\n--Jacob\r\n", "msg_date": "Fri, 5 Feb 2021 21:54:58 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: More test/kerberos tweaks" }, { "msg_contents": "On Fri, 2021-02-05 at 21:54 +0000, Jacob Champion wrote:\r\n> That just leaves the first patch, then.\r\n\r\nI've moved the first patch into the commitfest entry for [1], which\r\ndepends on it.\r\n\r\n--Jacob\r\n\r\n[1] https://www.postgresql.org/message-id/flat/c55788dd1773c521c862e8e0dddb367df51222be.camel%40vmware.com\r\n", "msg_date": "Fri, 26 Feb 2021 19:48:50 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: More test/kerberos tweaks" } ]
[ { "msg_contents": "Hello,\n\nThe attached patch is for supporting \"TRUNCATE\" on foreign tables.\n\nThis patch includes:\n* Adding \"ExecForeignTruncate\" function into FdwRoutine.\n* Enabling \"postgres_fdw\" to use TRUNCATE.\n\nThis patch was proposed by Kaigai-san in March 2020,\nbut it was returned because it can't be applied to the latest source codes.\n\nPlease refer to the discussion.\nhttps://www.postgresql.org/message-id/flat/CAOP8fzb-t3WVNLjGMC%2B4sV4AZa9S%3DMAQ7Q6pQoADMCf_1jp4ew%40mail.gmail.com#3b6c6ff85ff5c722b36c7a09b2dd7165\n\nI have fixed the patch due to submit it to Commit Fest 2021-03.\n\nregards,\n\n--\n------------------\nKazutaka Onishi\n(onishi@heterodb.com)", "msg_date": "Sat, 6 Feb 2021 22:11:14 +0900", "msg_from": "Kazutaka Onishi <onishi@heterodb.com>", "msg_from_op": true, "msg_subject": "TRUNCATE on foreign table" }, { "msg_contents": "Hi,\n+ if (strcmp(defel->defname, \"truncatable\") == 0)\n+ server_truncatable = defGetBoolean(defel);\n\nLooks like we can break out of the loop when the condition is met.\n\n+ /* ExecForeignTruncate() is invoked for each server */\n\nThe method name in the comment is slightly different from the actual method\nname.\n\n+ if (strcmp(defel->defname, \"truncatable\") == 0)\n+ truncatable = defGetBoolean(defel);\n\nWe can break out of the loop when the condition is met.\n\nCheers\n\nOn Sat, Feb 6, 2021 at 5:11 AM Kazutaka Onishi <onishi@heterodb.com> wrote:\n\n> Hello,\n>\n> The attached patch is for supporting \"TRUNCATE\" on foreign tables.\n>\n> This patch includes:\n> * Adding \"ExecForeignTruncate\" function into FdwRoutine.\n> * Enabling \"postgres_fdw\" to use TRUNCATE.\n>\n> This patch was proposed by Kaigai-san in March 2020,\n> but it was returned because it can't be applied to the latest source codes.\n>\n> Please refer to the discussion.\n>\n> 
https://www.postgresql.org/message-id/flat/CAOP8fzb-t3WVNLjGMC%2B4sV4AZa9S%3DMAQ7Q6pQoADMCf_1jp4ew%40mail.gmail.com#3b6c6ff85ff5c722b36c7a09b2dd7165\n>\n> I have fixed the patch due to submit it to Commit Fest 2021-03.\n>\n> regards,\n>\n> --\n> ------------------\n> Kazutaka Onishi\n> (onishi@heterodb.com)\n>", "msg_date": "Sat, 6 Feb 2021 09:10:00 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "Thank you for your comment! 
:D\nI have fixed it and attached the revised patch.\n\nregards,\n\n\n\n2021年2月7日(日) 2:08 Zhihong Yu <zyu@yugabyte.com>:\n\n> Hi,\n> + if (strcmp(defel->defname, \"truncatable\") == 0)\n> + server_truncatable = defGetBoolean(defel);\n>\n> Looks like we can break out of the loop when the condition is met.\n>\n> + /* ExecForeignTruncate() is invoked for each server */\n>\n> The method name in the comment is slightly different from the actual\n> method name.\n>\n> + if (strcmp(defel->defname, \"truncatable\") == 0)\n> + truncatable = defGetBoolean(defel);\n>\n> We can break out of the loop when the condition is met.\n>\n> Cheers\n>\n> On Sat, Feb 6, 2021 at 5:11 AM Kazutaka Onishi <onishi@heterodb.com>\n> wrote:\n>\n>> Hello,\n>>\n>> The attached patch is for supporting \"TRUNCATE\" on foreign tables.\n>>\n>> This patch includes:\n>> * Adding \"ExecForeignTruncate\" function into FdwRoutine.\n>> * Enabling \"postgres_fdw\" to use TRUNCATE.\n>>\n>> This patch was proposed by Kaigai-san in March 2020,\n>> but it was returned because it can't be applied to the latest source\n>> codes.\n>>\n>> Please refer to the discussion.\n>>\n>> https://www.postgresql.org/message-id/flat/CAOP8fzb-t3WVNLjGMC%2B4sV4AZa9S%3DMAQ7Q6pQoADMCf_1jp4ew%40mail.gmail.com#3b6c6ff85ff5c722b36c7a09b2dd7165\n>>\n>> I have fixed the patch due to submit it to Commit Fest 2021-03.\n>>\n>> regards,\n>>\n>> --\n>> ------------------\n>> Kazutaka Onishi\n>> (onishi@heterodb.com)\n>>\n>\n\n-- \n------------------\nKazutaka Onishi\n(onishi@heterodb.com)\n\n\n-- \n------------------\nKazutaka Onishi\n(onishi@heterodb.com)", "msg_date": "Sun, 7 Feb 2021 21:36:13 +0900", "msg_from": "Kazutaka Onishi <onishi@heterodb.com>", "msg_from_op": true, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "IIUC, \"truncatable\" would be set to \"false\" for relations which do not\nhave physical storage e.g. 
views but will be true for regular tables.\nWhen we are importing schema we need to set \"truncatable\"\nappropriately. Is that something we will support with this patch?\n\nWhy would one want to truncate a foreign table instead of truncating\nactual table wherever it is?\n\nOn Sun, Feb 7, 2021 at 6:06 PM Kazutaka Onishi <onishi@heterodb.com> wrote:\n>\n> Thank you for your comment! :D\n> I have fixed it and attached the revised patch.\n>\n> regards,\n>\n>\n>\n> 2021年2月7日(日) 2:08 Zhihong Yu <zyu@yugabyte.com>:\n>>\n>> Hi,\n>> + if (strcmp(defel->defname, \"truncatable\") == 0)\n>> + server_truncatable = defGetBoolean(defel);\n>>\n>> Looks like we can break out of the loop when the condition is met.\n>>\n>> + /* ExecForeignTruncate() is invoked for each server */\n>>\n>> The method name in the comment is slightly different from the actual method name.\n>>\n>> + if (strcmp(defel->defname, \"truncatable\") == 0)\n>> + truncatable = defGetBoolean(defel);\n>>\n>> We can break out of the loop when the condition is met.\n>>\n>> Cheers\n>>\n>> On Sat, Feb 6, 2021 at 5:11 AM Kazutaka Onishi <onishi@heterodb.com> wrote:\n>>>\n>>> Hello,\n>>>\n>>> The attached patch is for supporting \"TRUNCATE\" on foreign tables.\n>>>\n>>> This patch includes:\n>>> * Adding \"ExecForeignTruncate\" function into FdwRoutine.\n>>> * Enabling \"postgres_fdw\" to use TRUNCATE.\n>>>\n>>> This patch was proposed by Kaigai-san in March 2020,\n>>> but it was returned because it can't be applied to the latest source codes.\n>>>\n>>> Please refer to the discussion.\n>>> https://www.postgresql.org/message-id/flat/CAOP8fzb-t3WVNLjGMC%2B4sV4AZa9S%3DMAQ7Q6pQoADMCf_1jp4ew%40mail.gmail.com#3b6c6ff85ff5c722b36c7a09b2dd7165\n>>>\n>>> I have fixed the patch due to submit it to Commit Fest 2021-03.\n>>>\n>>> regards,\n>>>\n>>> --\n>>> ------------------\n>>> Kazutaka Onishi\n>>> (onishi@heterodb.com)\n>\n>\n>\n> --\n> ------------------\n> Kazutaka Onishi\n> (onishi@heterodb.com)\n>\n>\n> --\n> 
------------------\n> Kazutaka Onishi\n> (onishi@heterodb.com)\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 9 Feb 2021 17:31:32 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Tue, Feb 9, 2021 at 5:31 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> Why would one want to truncate a foreign table instead of truncating\n> actual table wherever it is?\n\nI think when the deletion on foreign tables (which actually deletes\nrows from the remote table?) is allowed, it does make sense to have a\nway to truncate the remote table via foreign table. Also, it can avoid\ngoing to each and every remote server and doing the truncation\ninstead.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 9 Feb 2021 17:49:01 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Tue, Feb 9, 2021 at 5:49 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Feb 9, 2021 at 5:31 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> > Why would one want to truncate a foreign table instead of truncating\n> > actual table wherever it is?\n>\n> I think when the deletion on foreign tables (which actually deletes\n> rows from the remote table?) is allowed, it does make sense to have a\n> way to truncate the remote table via foreign table. Also, it can avoid\n> going to each and every remote server and doing the truncation\n> instead.\n\nDELETE is very different from TRUNCATE. Application may want to DELETE\nbased on a join with a local table and hence it can not be executed on\na foreign server. 
That's not true with TRUNCATE.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 9 Feb 2021 18:00:27 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "> IIUC, \"truncatable\" would be set to \"false\" for relations which do not\n> have physical storage e.g. views but will be true for regular tables.\n\n\"truncatable\" option is just for the foreign table and it's not related\nwith whether it's on a physical storage or not.\n\"postgres_fdw\" already has \"updatable\" option to make the table read-only.\nHowever, \"updatable\" is for DML, and it's not suitable for TRUNCATE.\nTherefore new options \"truncatable\" was added.\n\nPlease refer to this message for details.\nhttps://www.postgresql.org/message-id/20200128040346.GC1552%40paquier.xyz\n\n> DELETE is very different from TRUNCATE. Application may want to DELETE\n> based on a join with a local table and hence it can not be executed on\n> a foreign server. That's not true with TRUNCATE.\n\nYeah, As you say, Applications doesn't need TRUNCATE.\nWe're focusing for analytical use, namely operating huge data.\nTRUNCATE can erase rows faster than DELETE.", "msg_date": "Tue, 9 Feb 2021 23:15:03 +0900", "msg_from": "Kazutaka Onishi <onishi@heterodb.com>", "msg_from_op": true, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Tue, Feb 9, 2021 at 7:45 PM Kazutaka Onishi <onishi@heterodb.com> wrote:\n>\n> > IIUC, \"truncatable\" would be set to \"false\" for relations which do not\n> > have physical storage e.g. views but will be true for regular tables.\n>\n> \"truncatable\" option is just for the foreign table and it's not related with whether it's on a physical storage or not.\n> \"postgres_fdw\" already has \"updatable\" option to make the table read-only.\n> However, \"updatable\" is for DML, and it's not suitable for TRUNCATE.\n> Therefore new options \"truncatable\" was added.\n>\n> Please refer to this message for details.\n> https://www.postgresql.org/message-id/20200128040346.GC1552%40paquier.xyz\n>\n> > DELETE is very different from TRUNCATE. Application may want to DELETE\n> > based on a join with a local table and hence it can not be executed on\n> > a foreign server. 
That's not true with TRUNCATE.\n>\n> Yeah, As you say, Applications doesn't need TRUNCATE.\n> We're focusing for analytical use, namely operating huge data.\n> TRUNCATE can erase rows faster than DELETE.\n\nThe question is why can't that truncate be run on the foreign server\nitself rather than local server?\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 10 Feb 2021 10:25:09 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "That's because using the foreign server is difficult for the user.\n\nFor example, the user doesn't always have the permission to login to\nthe forein server.\nIn some cases, the foreign table has been created by the administrator that\nhas permission to access the two servers and the user only uses the local\nserver.\nThen the user has to ask the administrator to run TRUNCATE every time.\n\nFurthermore,there are some fdw extensions which don't support SQL.\nmongo_fdw, redis_fdw, etc...\nThese extensions have been used to provide SQL interfaces to the users.\nIt's hard for the user to run TRUNCATE after learning each database.\n\nAnyway, it's more useful if the user can run queries in one place, right?\nDo you have any concerns?", "msg_date": "Thu, 11 Feb 2021 02:27:54 +0900", "msg_from": "Kazutaka Onishi <onishi@heterodb.com>", "msg_from_op": true, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "2021年2月10日(水) 13:55 Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>:\n>\n> On Tue, Feb 9, 2021 at 7:45 PM Kazutaka Onishi <onishi@heterodb.com> wrote:\n> >\n> > > IIUC, \"truncatable\" would be set to \"false\" for relations which do not\n> > > have physical storage e.g. views but will be true for regular tables.\n> >\n> > \"truncatable\" option is just for the foreign table and it's not related with whether it's on a physical storage or not.\n> > \"postgres_fdw\" already has \"updatable\" option to make the table read-only.\n> > However, \"updatable\" is for DML, and it's not suitable for TRUNCATE.\n> > Therefore new options \"truncatable\" was added.\n> >\n> > Please refer to this message for details.\n> > https://www.postgresql.org/message-id/20200128040346.GC1552%40paquier.xyz\n> >\n> > > DELETE is very different from TRUNCATE. Application may want to DELETE\n> > > based on a join with a local table and hence it can not be executed on\n> > > a foreign server. 
That's not true with TRUNCATE.\n> >\n> > Yeah, As you say, Applications doesn't need TRUNCATE.\n> > We're focusing for analytical use, namely operating huge data.\n> > TRUNCATE can erase rows faster than DELETE.\n>\n> The question is why can't that truncate be run on the foreign server\n> itself rather than local server?\n>\nAt least, PostgreSQL applies different access permissions on TRUNCATE.\nIf unconditional DELETE implicitly promotes to TRUNCATE, DB administrator\nhas to allow TRUNCATE permission on the remote table also.\n\nAlso, TRUNCATE acquires stronger lock the DELETE.\nDELETE still allows concurrent accesses to the table, even though TRUNCATE\ntakes AccessExclusive lock, thus, FDW driver has to control the\nconcurrent accesses\nby itself, if we have no dedicated TRUNCATE interface.\n\nThanks,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n", "msg_date": "Thu, 11 Feb 2021 09:39:56 +0900", "msg_from": "Kohei KaiGai <kaigai@heterodb.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Wed, Feb 10, 2021 at 10:58 PM Kazutaka Onishi <onishi@heterodb.com> wrote:\n>\n> That's because using the foreign server is difficult for the user.\n>\n> For example, the user doesn't always have the permission to login to the forein server.\n> In some cases, the foreign table has been created by the administrator that has permission to access the two servers and the user only uses the local server.\n> Then the user has to ask the administrator to run TRUNCATE every time.\n\nThat might actually be seen as a loophole but ...\n\n>\n> Furthermore,there are some fdw extensions which don't support SQL. 
mongo_fdw, redis_fdw, etc...\n> These extensions have been used to provide SQL interfaces to the users.\n> It's hard for the user to run TRUNCATE after learning each database.\n\nthis has some appeal.\n\nThanks for sharing the usecases.\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 11 Feb 2021 18:53:24 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Thu, Feb 11, 2021 at 6:23 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> On Wed, Feb 10, 2021 at 10:58 PM Kazutaka Onishi <onishi@heterodb.com>\n> wrote:\n> >\n> > That's because using the foreign server is difficult for the user.\n> >\n> > For example, the user doesn't always have the permission to login to the\n> forein server.\n> > In some cases, the foreign table has been created by the administrator\n> that has permission to access the two servers and the user only uses the\n> local server.\n> > Then the user has to ask the administrator to run TRUNCATE every time.\n>\n> That might actually be seen as a loophole but ...\n>\n> >\n> > Furthermore,there are some fdw extensions which don't support SQL.\n> mongo_fdw, redis_fdw, etc...\n> > These extensions have been used to provide SQL interfaces to the users.\n> > It's hard for the user to run TRUNCATE after learning each database.\n>\n> this has some appeal.\n>\n> Thanks for sharing the usecases.\n> --\n> Best Wishes,\n> Ashutosh Bapat\n>\n>\n> The patch (pgsql14-truncate-on-foreign-table.v2.patch) does not apply\nsuccessfully.\n\nhttp://cfbot.cputube.org/patch_32_2972.log\n\npatching file contrib/postgres_fdw/expected/postgres_fdw.out\nHunk #2 FAILED at 9179.\n1 out of 2 hunks FAILED -- saving rejects to file\ncontrib/postgres_fdw/expected/postgres_fdw.out.rej\n\n\nAs this is a minor change therefore I have updated the patch. 
Please take a\nlook.\n\n--\nIbrar Ahmed", "msg_date": "Mon, 8 Mar 2021 22:24:03 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Tue, Mar 9, 2021 at 2:24 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> The patch (pgsql14-truncate-on-foreign-table.v2.patch) does not apply successfully.\n>\n> http://cfbot.cputube.org/patch_32_2972.log\n>\n> patching file contrib/postgres_fdw/expected/postgres_fdw.out\n> Hunk #2 FAILED at 9179.\n> 1 out of 2 hunks FAILED -- saving rejects to file contrib/postgres_fdw/expected/postgres_fdw.out.rej\n>\n> As this is a minor change therefore I have updated the patch. Please take a look.\n\nThanks for updating the patch. I was able to apply it successfully\nthough I notice it doesn't pass make check-world.\n\nSpecifically, it fails the src/test/subscription/013_partition.pl\ntest. The problem seems to be that worker.c: apply_handle_truncate()\nhasn't been updated to add entries to relids_extra for partitions\nexpanded from a partitioned table, like ExecuteTruncate() does. 
That\nleads to relids and relids_extra having different lengths, which trips\nthe Assert in ExecuteTruncateGuts().\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 9 Mar 2021 11:53:59 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "To Ibrar,\nThank you for updating the patch!\n\nTo Amit,\nThank you for checking the patch, and I have confirmed the failure.\nNow I'm trying to fix it.\n\n\n\n2021年3月9日(火) 11:54 Amit Langote <amitlangote09@gmail.com>:\n\n> On Tue, Mar 9, 2021 at 2:24 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> > The patch (pgsql14-truncate-on-foreign-table.v2.patch) does not apply\n> successfully.\n> >\n> > http://cfbot.cputube.org/patch_32_2972.log\n> >\n> > patching file contrib/postgres_fdw/expected/postgres_fdw.out\n> > Hunk #2 FAILED at 9179.\n> > 1 out of 2 hunks FAILED -- saving rejects to file\n> contrib/postgres_fdw/expected/postgres_fdw.out.rej\n> >\n> > As this is a minor change therefore I have updated the patch. Please\n> take a look.\n>\n> Thanks for updating the patch. I was able to apply it successfully\n> though I notice it doesn't pass make check-world.\n>\n> Specifically, it fails the src/test/subscription/013_partition.pl\n> test. The problem seems to be that worker.c: apply_handle_truncate()\n> hasn't been updated to add entries to relids_extra for partitions\n> expanded from a partitioned table, like ExecuteTruncate() does. 
That\n> leads to relids and relids_extra having different lengths, which trips\n> the Assert in ExecuteTruncateGuts().\n>\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n>\n\nTo Ibrar, Thank you for updating the patch!To Amit,Thank you for checking the patch, and I have confirmed the failure.Now I'm trying to fix it.2021年3月9日(火) 11:54 Amit Langote <amitlangote09@gmail.com>:On Tue, Mar 9, 2021 at 2:24 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n> The patch (pgsql14-truncate-on-foreign-table.v2.patch) does not apply successfully.\n>\n> http://cfbot.cputube.org/patch_32_2972.log\n>\n> patching file contrib/postgres_fdw/expected/postgres_fdw.out\n> Hunk #2 FAILED at 9179.\n> 1 out of 2 hunks FAILED -- saving rejects to file contrib/postgres_fdw/expected/postgres_fdw.out.rej\n>\n> As this is a minor change therefore I have updated the patch. Please take a look.\n\nThanks for updating the patch.  I was able to apply it successfully\nthough I notice it doesn't pass make check-world.\n\nSpecifically, it fails the src/test/subscription/013_partition.pl\ntest.  The problem seems to be that worker.c: apply_handle_truncate()\nhasn't been updated to add entries to relids_extra for partitions\nexpanded from a partitioned table, like ExecuteTruncate() does.  That\nleads to relids and relids_extra having different lengths, which trips\nthe Assert in ExecuteTruncateGuts().\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Sat, 13 Mar 2021 12:35:46 +0900", "msg_from": "Kazutaka Onishi <onishi@heterodb.com>", "msg_from_op": true, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "I have fixed the patch to pass check-world test. 
:D\n\n2021年3月13日(土) 12:35 Kazutaka Onishi <onishi@heterodb.com>:\n\n> To Ibrar,\n> Thank you for updating the patch!\n>\n> To Amit,\n> Thank you for checking the patch, and I have confirmed the failure.\n> Now I'm trying to fix it.\n>\n>\n>\n> 2021年3月9日(火) 11:54 Amit Langote <amitlangote09@gmail.com>:\n>\n>> On Tue, Mar 9, 2021 at 2:24 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:\n>> > The patch (pgsql14-truncate-on-foreign-table.v2.patch) does not apply\n>> successfully.\n>> >\n>> > http://cfbot.cputube.org/patch_32_2972.log\n>> >\n>> > patching file contrib/postgres_fdw/expected/postgres_fdw.out\n>> > Hunk #2 FAILED at 9179.\n>> > 1 out of 2 hunks FAILED -- saving rejects to file\n>> contrib/postgres_fdw/expected/postgres_fdw.out.rej\n>> >\n>> > As this is a minor change therefore I have updated the patch. Please\n>> take a look.\n>>\n>> Thanks for updating the patch. I was able to apply it successfully\n>> though I notice it doesn't pass make check-world.\n>>\n>> Specifically, it fails the src/test/subscription/013_partition.pl\n>> test. The problem seems to be that worker.c: apply_handle_truncate()\n>> hasn't been updated to add entries to relids_extra for partitions\n>> expanded from a partitioned table, like ExecuteTruncate() does. That\n>> leads to relids and relids_extra having different lengths, which trips\n>> the Assert in ExecuteTruncateGuts().\n>>\n>> --\n>> Amit Langote\n>> EDB: http://www.enterprisedb.com\n>>\n>", "msg_date": "Sat, 13 Mar 2021 18:57:20 +0900", "msg_from": "Kazutaka Onishi <onishi@heterodb.com>", "msg_from_op": true, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "\n\nOn 2021/03/13 18:57, Kazutaka Onishi wrote:\n> I have fixed the patch to pass check-world test. :D\n\nThanks for updating the patch! Here are some review comments from me.\n\n\n By default all foreign tables using <filename>postgres_fdw</filename> are assumed\n to be updatable. 
This may be overridden using the following option:\n\nIn postgres-fdw.sgml, \"and truncatable\" should be appended into\nthe above first description? Also \"option\" in the second description\nshould be a plural form \"options\"?\n\n\n <command>TRUNCATE</command> is not currently supported for foreign tables.\n This implies that if a specified table has any descendant tables that are\n foreign, the command will fail.\n\ntruncate.sgml should be updated because, for example, it contains\nthe above descriptions.\n\n\n+ <literal>frels_extra</literal> is same length with\n+ <literal>frels_list</literal>, that delivers extra information of\n+ the context where the foreign-tables are truncated.\n+ </para>\n\nDon't we need to document the detail information about frels_extra?\nOtherwise the developers of FDW would fail to understand how to\nhandle the frels_extra when trying to make their FDWs support TRUNCATE.\n\n\n+\t\trelids_extra = lappend_int(relids_extra, (recurse ? 0 : 1));\n+\t\t\t\trelids_extra = lappend_int(relids_extra, -1);\n\npostgres_fdw determines whether to specify ONLY or not by checking\nwhether the passed extra value is zero or not. That is, for example,\nusing only 0 and 1 for extra values is enough for the purpose. But\nExecuteTruncate() sets three values 0, -1 and 1 as extra ones. Why are\nthese three values necessary?\n\n\nWith the patch, if both local and foreign tables are specified as\nthe target tables to truncate, TRUNCATE command tries to truncate\nforeign tables after truncating local ones. That is, if \"truncatable\"\noption is set to false or enough permission to truncate is not granted\nyet in the foreign server, an error will be thrown after the local tables\nare truncated. I don't think this is good order of processings. IMO,\ninstead, we should check whether foreign tables can be truncated\nbefore any actual truncation operations. For example, we can easily\ndo that by truncate foreign tables before local ones. 
Thought?\n\n\nXLOG_HEAP_TRUNCATE record is written even for the truncation of\na foreign table. Why is this necessary?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 25 Mar 2021 03:47:03 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "Fujii-san,\n\nThank you for your review!\nNow I prepare v5 patch and I'll answer to your each comment. please\ncheck this again.\nm(_ _)m\n\n1. In postgres-fdw.sgml, \"and truncatable\" should be appended into the\nabove first description?\n2. truncate.sgml should be updated because, for example, it contains\nthe above descriptions.\n\nYeah, you're right. I've fixed it.\n\n\n\n3. Don't we need to document the detail information about frels_extra?\n\nI've written about frels_extra into fdwhander.sgml.\n\n\n\n4. postgres_fdw determines whether to specify ONLY or not by checking\nwhether the passed extra value is zero or not.\n\nPlease refer this:\nhttps://www.postgresql.org/message-id/CAOP8fzb-t3WVNLjGMC%2B4sV4AZa9S%3DMAQ7Q6pQoADMCf_1jp4ew%40mail.gmail.com\n> Negative value means that foreign-tables are not specified in the TRUNCATE\n> command, but truncated due to dependency (like partition's child leaf).\n\nI've added this information into fdwhandler.sgml.\n\n\n\n5. For example, we can easily do that by truncate foreign tables\nbefore local ones. Thought?\n\nUmm... yeah, I feel it's better procedure, but not so required because\nTRUNCATE is NOT called frequently.\nCertainly, we already have postgresIsForeignUpdatable() to check\nwhether the foreign table is updatable or not.\nFollowing this way, we have to add postgresIsForeignTruncatable() to check.\nHowever, Unlike UPDATE, TRUNCATE is NOT called frequently. 
Current\nprocedure is inefficient but works correctly.\nThus, I feel postgresIsForeignTruncatable() is not needed.\n\n\n6. XLOG_HEAP_TRUNCATE record is written even for the truncation of a\nforeign table. Why is this necessary?\n\nPlease give us more time to investigate this.\n\n2021年3月25日(木) 3:47 Fujii Masao <masao.fujii@oss.nttdata.com>:\n>\n>\n>\n> On 2021/03/13 18:57, Kazutaka Onishi wrote:\n> > I have fixed the patch to pass check-world test. :D\n>\n> Thanks for updating the patch! Here are some review comments from me.\n>\n>\n> By default all foreign tables using <filename>postgres_fdw</filename> are assumed\n> to be updatable. This may be overridden using the following option:\n>\n> In postgres-fdw.sgml, \"and truncatable\" should be appended into\n> the above first description? Also \"option\" in the second description\n> should be a plural form \"options\"?\n>\n>\n> <command>TRUNCATE</command> is not currently supported for foreign tables.\n> This implies that if a specified table has any descendant tables that are\n> foreign, the command will fail.\n>\n> truncate.sgml should be updated because, for example, it contains\n> the above descriptions.\n>\n>\n> + <literal>frels_extra</literal> is same length with\n> + <literal>frels_list</literal>, that delivers extra information of\n> + the context where the foreign-tables are truncated.\n> + </para>\n>\n> Don't we need to document the detail information about frels_extra?\n> Otherwise the developers of FDW would fail to understand how to\n> handle the frels_extra when trying to make their FDWs support TRUNCATE.\n>\n>\n> + relids_extra = lappend_int(relids_extra, (recurse ? 0 : 1));\n> + relids_extra = lappend_int(relids_extra, -1);\n>\n> postgres_fdw determines whether to specify ONLY or not by checking\n> whether the passed extra value is zero or not. That is, for example,\n> using only 0 and 1 for extra values is enough for the purpose. But\n> ExecuteTruncate() sets three values 0, -1 and 1 as extra ones. 
Why are\n> these three values necessary?\n>\n>\n> With the patch, if both local and foreign tables are specified as\n> the target tables to truncate, TRUNCATE command tries to truncate\n> foreign tables after truncating local ones. That is, if \"truncatable\"\n> option is set to false or enough permission to truncate is not granted\n> yet in the foreign server, an error will be thrown after the local tables\n> are truncated. I don't think this is good order of processings. IMO,\n> instead, we should check whether foreign tables can be truncated\n> before any actual truncation operations. For example, we can easily\n> do that by truncate foreign tables before local ones. Thought?\n>\n>\n> XLOG_HEAP_TRUNCATE record is written even for the truncation of\n> a foreign table. Why is this necessary?\n>\n> Regards,\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION", "msg_date": "Sun, 28 Mar 2021 02:37:06 +0900", "msg_from": "Kazutaka Onishi <onishi@heterodb.com>", "msg_from_op": true, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "Onishi-san,\n\nThe v5 patch contains full-contents of \"src/backend/commands/tablecmds.c.orig\".\nPlease check it.\n\n2021年3月28日(日) 2:37 Kazutaka Onishi <onishi@heterodb.com>:\n>\n> Fujii-san,\n>\n> Thank you for your review!\n> Now I prepare v5 patch and I'll answer to your each comment. please\n> check this again.\n> m(_ _)m\n>\n> 1. In postgres-fdw.sgml, \"and truncatable\" should be appended into the\n> above first description?\n> 2. truncate.sgml should be updated because, for example, it contains\n> the above descriptions.\n>\n> Yeah, you're right. I've fixed it.\n>\n>\n>\n> 3. Don't we need to document the detail information about frels_extra?\n>\n> I've written about frels_extra into fdwhander.sgml.\n>\n>\n>\n> 4. 
postgres_fdw determines whether to specify ONLY or not by checking\n> whether the passed extra value is zero or not.\n>\n> Please refer this:\n> https://www.postgresql.org/message-id/CAOP8fzb-t3WVNLjGMC%2B4sV4AZa9S%3DMAQ7Q6pQoADMCf_1jp4ew%40mail.gmail.com\n> > Negative value means that foreign-tables are not specified in the TRUNCATE\n> > command, but truncated due to dependency (like partition's child leaf).\n>\n> I've added this information into fdwhandler.sgml.\n>\n>\n>\n> 5. For example, we can easily do that by truncate foreign tables\n> before local ones. Thought?\n>\n> Umm... yeah, I feel it's better procedure, but not so required because\n> TRUNCATE is NOT called frequently.\n> Certainly, we already have postgresIsForeignUpdatable() to check\n> whether the foreign table is updatable or not.\n> Following this way, we have to add postgresIsForeignTruncatable() to check.\n> However, Unlike UPDATE, TRUNCATE is NOT called frequently. Current\n> procedure is inefficient but works correctly.\n> Thus, I feel postgresIsForeignTruncatable() is not needed.\n>\n>\n> 6. XLOG_HEAP_TRUNCATE record is written even for the truncation of a\n> foreign table. Why is this necessary?\n>\n> Please give us more time to investigate this.\n>\n> 2021年3月25日(木) 3:47 Fujii Masao <masao.fujii@oss.nttdata.com>:\n> >\n> >\n> >\n> > On 2021/03/13 18:57, Kazutaka Onishi wrote:\n> > > I have fixed the patch to pass check-world test. :D\n> >\n> > Thanks for updating the patch! Here are some review comments from me.\n> >\n> >\n> > By default all foreign tables using <filename>postgres_fdw</filename> are assumed\n> > to be updatable. This may be overridden using the following option:\n> >\n> > In postgres-fdw.sgml, \"and truncatable\" should be appended into\n> > the above first description? 
Also \"option\" in the second description\n> > should be a plural form \"options\"?\n> >\n> >\n> > <command>TRUNCATE</command> is not currently supported for foreign tables.\n> > This implies that if a specified table has any descendant tables that are\n> > foreign, the command will fail.\n> >\n> > truncate.sgml should be updated because, for example, it contains\n> > the above descriptions.\n> >\n> >\n> > + <literal>frels_extra</literal> is same length with\n> > + <literal>frels_list</literal>, that delivers extra information of\n> > + the context where the foreign-tables are truncated.\n> > + </para>\n> >\n> > Don't we need to document the detail information about frels_extra?\n> > Otherwise the developers of FDW would fail to understand how to\n> > handle the frels_extra when trying to make their FDWs support TRUNCATE.\n> >\n> >\n> > + relids_extra = lappend_int(relids_extra, (recurse ? 0 : 1));\n> > + relids_extra = lappend_int(relids_extra, -1);\n> >\n> > postgres_fdw determines whether to specify ONLY or not by checking\n> > whether the passed extra value is zero or not. That is, for example,\n> > using only 0 and 1 for extra values is enough for the purpose. But\n> > ExecuteTruncate() sets three values 0, -1 and 1 as extra ones. Why are\n> > these three values necessary?\n> >\n> >\n> > With the patch, if both local and foreign tables are specified as\n> > the target tables to truncate, TRUNCATE command tries to truncate\n> > foreign tables after truncating local ones. That is, if \"truncatable\"\n> > option is set to false or enough permission to truncate is not granted\n> > yet in the foreign server, an error will be thrown after the local tables\n> > are truncated. I don't think this is good order of processings. IMO,\n> > instead, we should check whether foreign tables can be truncated\n> > before any actual truncation operations. For example, we can easily\n> > do that by truncate foreign tables before local ones. 
Thought?\n> >\n> >\n> > XLOG_HEAP_TRUNCATE record is written even for the truncation of\n> > a foreign table. Why is this necessary?\n> >\n> > Regards,\n> >\n> > --\n> > Fujii Masao\n> > Advanced Computing Technology Center\n> > Research and Development Headquarters\n> > NTT DATA CORPORATION\n\n\n\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n", "msg_date": "Mon, 29 Mar 2021 09:03:40 +0900", "msg_from": "Kohei KaiGai <kaigai@heterodb.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "Fujii-san,\n\n> XLOG_HEAP_TRUNCATE record is written even for the truncation of\n> a foreign table. Why is this necessary?\n>\nForeign-tables are often used to access local data structure, like\ncolumnar data files\non filesystem, not only remote accesses like postgres_fdw.\nIn case when we want to implement logical replication on this kind of\nforeign-tables,\ntruncate-command must be delivered to subscriber node - to truncate\nits local data.\n\nIn case of remote-access FDW drivers, truncate-command on the subscriber-side is\nprobably waste of cycles, however, only FDW driver and DBA who configured the\nforeign-table know whether it is necessary, or not.\n\nHow about your opinions?\n\nBest regards,\n\n2021年3月25日(木) 3:47 Fujii Masao <masao.fujii@oss.nttdata.com>:\n>\n>\n>\n> On 2021/03/13 18:57, Kazutaka Onishi wrote:\n> > I have fixed the patch to pass check-world test. :D\n>\n> Thanks for updating the patch! Here are some review comments from me.\n>\n>\n> By default all foreign tables using <filename>postgres_fdw</filename> are assumed\n> to be updatable. This may be overridden using the following option:\n>\n> In postgres-fdw.sgml, \"and truncatable\" should be appended into\n> the above first description? 
Also \"option\" in the second description\n> should be a plural form \"options\"?\n>\n>\n> <command>TRUNCATE</command> is not currently supported for foreign tables.\n> This implies that if a specified table has any descendant tables that are\n> foreign, the command will fail.\n>\n> truncate.sgml should be updated because, for example, it contains\n> the above descriptions.\n>\n>\n> + <literal>frels_extra</literal> is same length with\n> + <literal>frels_list</literal>, that delivers extra information of\n> + the context where the foreign-tables are truncated.\n> + </para>\n>\n> Don't we need to document the detail information about frels_extra?\n> Otherwise the developers of FDW would fail to understand how to\n> handle the frels_extra when trying to make their FDWs support TRUNCATE.\n>\n>\n> + relids_extra = lappend_int(relids_extra, (recurse ? 0 : 1));\n> + relids_extra = lappend_int(relids_extra, -1);\n>\n> postgres_fdw determines whether to specify ONLY or not by checking\n> whether the passed extra value is zero or not. That is, for example,\n> using only 0 and 1 for extra values is enough for the purpose. But\n> ExecuteTruncate() sets three values 0, -1 and 1 as extra ones. Why are\n> these three values necessary?\n>\n>\n> With the patch, if both local and foreign tables are specified as\n> the target tables to truncate, TRUNCATE command tries to truncate\n> foreign tables after truncating local ones. That is, if \"truncatable\"\n> option is set to false or enough permission to truncate is not granted\n> yet in the foreign server, an error will be thrown after the local tables\n> are truncated. I don't think this is good order of processings. IMO,\n> instead, we should check whether foreign tables can be truncated\n> before any actual truncation operations. For example, we can easily\n> do that by truncate foreign tables before local ones. Thought?\n>\n>\n> XLOG_HEAP_TRUNCATE record is written even for the truncation of\n> a foreign table. 
Why is this necessary?\n>\n> Regards,\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n", "msg_date": "Mon, 29 Mar 2021 09:31:06 +0900", "msg_from": "Kohei KaiGai <kaigai@heterodb.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "\n\nOn 2021/03/29 9:31, Kohei KaiGai wrote:\n> Fujii-san,\n> \n>> XLOG_HEAP_TRUNCATE record is written even for the truncation of\n>> a foreign table. Why is this necessary?\n>>\n> Foreign-tables are often used to access local data structure, like\n> columnar data files\n> on filesystem, not only remote accesses like postgres_fdw.\n> In case when we want to implement logical replication on this kind of\n> foreign-tables,\n> truncate-command must be delivered to subscriber node - to truncate\n> its local data.\n> \n> In case of remote-access FDW drivers, truncate-command on the subscriber-side is\n> probably waste of cycles, however, only FDW driver and DBA who configured the\n> foreign-table know whether it is necessary, or not.\n> \n> How about your opinions?\n\nI understand the motivation of this. But the other DMLs like UPDATE also\ndo the same thing for foreign tables? That is, when those DML commands\nare executed on foreign tables, their changes are WAL-logged in a publisher side,\ne.g., for logical replication? 
If not, it seems strange to allow only TRUNCATE\non foreign tables to be WAL-logged in a publisher side...\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 29 Mar 2021 10:53:14 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Mon, Mar 29, 2021 at 10:53:14AM +0900, Fujii Masao wrote:\n> I understand the motivation of this. But the other DMLs like UPDATE also\n> do the same thing for foreign tables? That is, when those DML commands\n> are executed on foreign tables, their changes are WAL-logged in a publisher side,\n> e.g., for logical replication? If not, it seems strange to allow only TRUNCATE\n> on foreign tables to be WAL-logged in a publisher side...\n\nExecuting DMLs on foreign tables does not generate any WAL AFAIK with\nthe backend core code, even with wal_level = logical, as the DML is\nexecuted within the FDW callback (see just ExecUpdate() or\nExecInsert() in nodeModifyTable.c), and foreign tables don't have an\nAM set as they have no physical storage. A FDW may decide to generate\nsome WAL records by itself though when doing the opeation, using the\ngeneric WAL interface but that's rather limited.\n\nGenerating WAL for the truncation of foreign tables sounds also like a\nstrange concept to me. I think that you should just make the patch\nwork so as the truncation is passed down to the FDW that decides what\nit needs to do with it, and do nothing more than that.\n--\nMichael", "msg_date": "Mon, 29 Mar 2021 13:55:34 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "\n\nOn 2021/03/28 2:37, Kazutaka Onishi wrote:\n> Fujii-san,\n> \n> Thank you for your review!\n> Now I prepare v5 patch and I'll answer to your each comment. 
please\n> check this again.\n\nThanks a lot!\n\n> 5. For example, we can easily do that by truncate foreign tables\n> before local ones. Thought?\n> \n> Umm... yeah, I feel it's better procedure, but not so required because\n> TRUNCATE is NOT called frequently.\n> Certainly, we already have postgresIsForeignUpdatable() to check\n> whether the foreign table is updatable or not.\n> Following this way, we have to add postgresIsForeignTruncatable() to check.\n> However, Unlike UPDATE, TRUNCATE is NOT called frequently. Current\n> procedure is inefficient but works correctly.\n> Thus, I feel postgresIsForeignTruncatable() is not needed.\n\nI'm concerned about the case where permission errors at the remote servers\nrather than that truncatable option is disabled. The comments of\nExecuteTruncate() explains its design as follows. But the patch seems to break\nthis because it truncates the local tables before checking the permission on\nforeign tables (i.e., the local tables in remote servers)... No?\n\n We first open and grab exclusive\n lock on all relations involved, checking permissions and otherwise\n verifying that the relation is OK for truncation\n Finally all the relations are truncated and reindexed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 30 Mar 2021 02:53:57 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "\n\nOn 2021/03/29 13:55, Michael Paquier wrote:\n> On Mon, Mar 29, 2021 at 10:53:14AM +0900, Fujii Masao wrote:\n>> I understand the motivation of this. But the other DMLs like UPDATE also\n>> do the same thing for foreign tables? That is, when those DML commands\n>> are executed on foreign tables, their changes are WAL-logged in a publisher side,\n>> e.g., for logical replication? 
If not, it seems strange to allow only TRUNCATE\n>> on foreign tables to be WAL-logged in a publisher side...\n> \n> Executing DMLs on foreign tables does not generate any WAL AFAIK with\n> the backend core code, even with wal_level = logical, as the DML is\n> executed within the FDW callback (see just ExecUpdate() or\n> ExecInsert() in nodeModifyTable.c), and foreign tables don't have an\n> AM set as they have no physical storage. A FDW may decide to generate\n> some WAL records by itself though when doing the opeation, using the\n> generic WAL interface but that's rather limited.\n> \n> Generating WAL for the truncation of foreign tables sounds also like a\n> strange concept to me. I think that you should just make the patch\n> work so as the truncation is passed down to the FDW that decides what\n> it needs to do with it, and do nothing more than that.\n\nAgreed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 30 Mar 2021 02:54:17 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "\n\nOn 2021/03/28 2:37, Kazutaka Onishi wrote:\n> Fujii-san,\n> \n> Thank you for your review!\n> Now I prepare v5 patch and I'll answer to your each comment. please\n> check this again.\n> m(_ _)m\n> \n> 1. In postgres-fdw.sgml, \"and truncatable\" should be appended into the\n> above first description?\n> 2. truncate.sgml should be updated because, for example, it contains\n> the above descriptions.\n> \n> Yeah, you're right. I've fixed it.\n> \n> \n> \n> 3. Don't we need to document the detail information about frels_extra?\n> \n> I've written about frels_extra into fdwhander.sgml.\n> \n> \n> \n> 4. 
postgres_fdw determines whether to specify ONLY or not by checking\n> whether the passed extra value is zero or not.\n> \n> Please refer this:\n> https://www.postgresql.org/message-id/CAOP8fzb-t3WVNLjGMC%2B4sV4AZa9S%3DMAQ7Q6pQoADMCf_1jp4ew%40mail.gmail.com\n>> Negative value means that foreign-tables are not specified in the TRUNCATE\n>> command, but truncated due to dependency (like partition's child leaf).\n> \n> I've added this information into fdwhandler.sgml.\n\nEven when a foreign table is specified explicitly in TRUNCATE command,\nits extra value can be negative if it's found as an inherited children firstly\n(i.e., in the case where the partitioned table having that foreign table as\nits partition is specified explicitly in TRUNCATE command).\nIsn't this a problem?\n\nPlease imagine the following example;\n\n----------------------------------\ncreate extension postgres_fdw;\ncreate server loopback foreign data wrapper postgres_fdw;\ncreate user mapping for public server loopback;\n\ncreate table t (i int, j int) partition by hash (j);\ncreate table t0 partition of t for values with (modulus 2, remainder 0);\ncreate table t1 partition of t for values with (modulus 2, remainder 1);\n\ncreate table test (i int, j int) partition by hash (i);\ncreate table test0 partition of test for values with (modulus 2, remainder 0);\ncreate foreign table ft partition of test for values with (modulus 2, remainder 1) server loopback options (table_name 't');\n----------------------------------\n\nIn this example, \"truncate ft, test\" works fine, but \"truncate test, ft\" causes\nan error though they should work in the same way basically.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 30 Mar 2021 03:45:04 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": 
"On Tue, Mar 30, 2021 at 2:54 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2021/03/29 13:55, Michael Paquier wrote:\n> > On Mon, Mar 29, 2021 at 10:53:14AM +0900, Fujii Masao wrote:\n> >> I understand the motivation of this. But the other DMLs like UPDATE also\n> >> do the same thing for foreign tables? That is, when those DML commands\n> >> are executed on foreign tables, their changes are WAL-logged in a publisher side,\n> >> e.g., for logical replication? If not, it seems strange to allow only TRUNCATE\n> >> on foreign tables to be WAL-logged in a publisher side...\n> >\n> > Executing DMLs on foreign tables does not generate any WAL AFAIK with\n> > the backend core code, even with wal_level = logical, as the DML is\n> > executed within the FDW callback (see just ExecUpdate() or\n> > ExecInsert() in nodeModifyTable.c), and foreign tables don't have an\n> > AM set as they have no physical storage. A FDW may decide to generate\n> > some WAL records by itself though when doing the opeation, using the\n> > generic WAL interface but that's rather limited.\n> >\n> > Generating WAL for the truncation of foreign tables sounds also like a\n> > strange concept to me. I think that you should just make the patch\n> > work so as the truncation is passed down to the FDW that decides what\n> > it needs to do with it, and do nothing more than that.\n>\n> Agreed.\n>\nOk, it's fair enough.\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n", "msg_date": "Tue, 30 Mar 2021 09:29:46 +0900", "msg_from": "Kohei KaiGai <kaigai@heterodb.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Tue, Mar 30, 2021 at 3:45 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2021/03/28 2:37, Kazutaka Onishi wrote:\n> > Fujii-san,\n> >\n> > Thank you for your review!\n> > Now I prepare v5 patch and I'll answer to your each comment. please\n> > check this again.\n> > m(_ _)m\n> >\n> > 1. 
In postgres-fdw.sgml, \"and truncatable\" should be appended into the\n> > above first description?\n> > 2. truncate.sgml should be updated because, for example, it contains\n> > the above descriptions.\n> >\n> > Yeah, you're right. I've fixed it.\n> >\n> >\n> >\n> > 3. Don't we need to document the detail information about frels_extra?\n> >\n> > I've written about frels_extra into fdwhander.sgml.\n> >\n> >\n> >\n> > 4. postgres_fdw determines whether to specify ONLY or not by checking\n> > whether the passed extra value is zero or not.\n> >\n> > Please refer this:\n> > https://www.postgresql.org/message-id/CAOP8fzb-t3WVNLjGMC%2B4sV4AZa9S%3DMAQ7Q6pQoADMCf_1jp4ew%40mail.gmail.com\n> >> Negative value means that foreign-tables are not specified in the TRUNCATE\n> >> command, but truncated due to dependency (like partition's child leaf).\n> >\n> > I've added this information into fdwhandler.sgml.\n>\n> Even when a foreign table is specified explicitly in TRUNCATE command,\n> its extra value can be negative if it's found as an inherited children firstly\n> (i.e., in the case where the partitioned table having that foreign table as\n> its partition is specified explicitly in TRUNCATE command).\n> Isn't this a problem?\n>\n> Please imagine the following example;\n>\n> ----------------------------------\n> create extension postgres_fdw;\n> create server loopback foreign data wrapper postgres_fdw;\n> create user mapping for public server loopback;\n>\n> create table t (i int, j int) partition by hash (j);\n> create table t0 partition of t for values with (modulus 2, remainder 0);\n> create table t1 partition of t for values with (modulus 2, remainder 1);\n>\n> create table test (i int, j int) partition by hash (i);\n> create table test0 partition of test for values with (modulus 2, remainder 0);\n> create foreign table ft partition of test for values with (modulus 2, remainder 1) server loopback options (table_name 't');\n> ----------------------------------\n>\n> In this 
example, \"truncate ft, test\" works fine, but \"truncate test, ft\" causes\n> an error though they should work in the same way basically.\n>\n(Although it was originally designed by me...)\nIf frels_extra would be a bit-masked value, we can avoid the problem.\n\nPlease assume the three labels below:\n#define TRUNCATE_REL_CONTEXT__NORMAL 0x01\n#define TRUNCATE_REL_CONTEXT__ONLY 0x02\n#define TRUNCATE_REL_CONTEXT__CASCADED 0x04\n\nThen, assign these labels on the extra flag according to the context where\nthe foreign-tables appeared in the truncate command.\nEven if it is specified multiple times in the different context, FDW extension\ncan handle the best option according to the flags.\n\n> In this example, \"truncate ft, test\" works fine, but \"truncate test, ft\" causes\n\nIn both cases, ExecForeignTruncate shall be invoked to \"ft\" with\n(NORMAL | CASCADED),\nthus, postgres_fdw can determine the remote truncate command shall be\nexecuted without \"ONLY\" clause.\n\nHow about the idea?\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n", "msg_date": "Tue, 30 Mar 2021 10:11:30 +0900", "msg_from": "Kohei KaiGai <kaigai@heterodb.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "\n\nOn 2021/03/30 10:11, Kohei KaiGai wrote:\n> 2021年3月30日(火) 3:45 Fujii Masao <masao.fujii@oss.nttdata.com>:\n>>\n>> On 2021/03/28 2:37, Kazutaka Onishi wrote:\n>>> Fujii-san,\n>>>\n>>> Thank you for your review!\n>>> Now I prepare v5 patch and I'll answer to your each comment. please\n>>> check this again.\n>>> m(_ _)m\n>>>\n>>> 1. In postgres-fdw.sgml, \"and truncatable\" should be appended into the\n>>> above first description?\n>>> 2. truncate.sgml should be updated because, for example, it contains\n>>> the above descriptions.\n>>>\n>>> Yeah, you're right. I've fixed it.\n>>>\n>>>\n>>>\n>>> 3. 
Don't we need to document the detail information about frels_extra?\n>>>\n>>> I've written about frels_extra into fdwhander.sgml.\n>>>\n>>>\n>>>\n>>> 4. postgres_fdw determines whether to specify ONLY or not by checking\n>>> whether the passed extra value is zero or not.\n>>>\n>>> Please refer this:\n>>> https://www.postgresql.org/message-id/CAOP8fzb-t3WVNLjGMC%2B4sV4AZa9S%3DMAQ7Q6pQoADMCf_1jp4ew%40mail.gmail.com\n>>>> Negative value means that foreign-tables are not specified in the TRUNCATE\n>>>> command, but truncated due to dependency (like partition's child leaf).\n>>>\n>>> I've added this information into fdwhandler.sgml.\n>>\n>> Even when a foreign table is specified explicitly in TRUNCATE command,\n>> its extra value can be negative if it's found as an inherited children firstly\n>> (i.e., in the case where the partitioned table having that foreign table as\n>> its partition is specified explicitly in TRUNCATE command).\n>> Isn't this a problem?\n>>\n>> Please imagine the following example;\n>>\n>> ----------------------------------\n>> create extension postgres_fdw;\n>> create server loopback foreign data wrapper postgres_fdw;\n>> create user mapping for public server loopback;\n>>\n>> create table t (i int, j int) partition by hash (j);\n>> create table t0 partition of t for values with (modulus 2, remainder 0);\n>> create table t1 partition of t for values with (modulus 2, remainder 1);\n>>\n>> create table test (i int, j int) partition by hash (i);\n>> create table test0 partition of test for values with (modulus 2, remainder 0);\n>> create foreign table ft partition of test for values with (modulus 2, remainder 1) server loopback options (table_name 't');\n>> ----------------------------------\n>>\n>> In this example, \"truncate ft, test\" works fine, but \"truncate test, ft\" causes\n>> an error though they should work in the same way basically.\n>>\n> (Although it was originally designed by me...)\n> If frels_extra would be a bit-masked value, we can 
avoid the problem.\n> \n> Please assume the three labels below:\n> #define TRUNCATE_REL_CONTEXT__NORMAL 0x01\n> #define TRUNCATE_REL_CONTEXT__ONLY 0x02\n> #define TRUNCATE_REL_CONTEXT__CASCADED 0x04\n> \n> Then, assign these labels on the extra flag according to the context where\n> the foreign-tables appeared in the truncate command.\n> Even if it is specified multiple times in the different context, FDW extension\n> can handle the best option according to the flags.\n> \n>> In this example, \"truncate ft, test\" works fine, but \"truncate test, ft\" causes\n> \n> In both cases, ExecForeignTruncate shall be invoked to \"ft\" with\n> (NORMAL | CASCADED),\n> thus, postgres_fdw can determine the remote truncate command shall be\n> executed without \"ONLY\" clause.\n> \n> How about the idea?\n\nThis idea looks better to me.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 30 Mar 2021 15:29:33 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "2021年3月30日(火) 2:53 Fujii Masao <masao.fujii@oss.nttdata.com>:\n>\n> On 2021/03/28 2:37, Kazutaka Onishi wrote:\n> > Fujii-san,\n> >\n> > Thank you for your review!\n> > Now I prepare v5 patch and I'll answer to your each comment. please\n> > check this again.\n>\n> Thanks a lot!\n>\n> > 5. For example, we can easily do that by truncate foreign tables\n> > before local ones. Thought?\n> >\n> > Umm... yeah, I feel it's better procedure, but not so required because\n> > TRUNCATE is NOT called frequently.\n> > Certainly, we already have postgresIsForeignUpdatable() to check\n> > whether the foreign table is updatable or not.\n> > Following this way, we have to add postgresIsForeignTruncatable() to check.\n> > However, Unlike UPDATE, TRUNCATE is NOT called frequently. 
Current\n> > procedure is inefficient but works correctly.\n> > Thus, I feel postgresIsForeignTruncatable() is not needed.\n>\n> I'm concerned about the case where permission errors at the remote servers\n> rather than that truncatable option is disabled. The comments of\n> ExecuteTruncate() explains its design as follows. But the patch seems to break\n> this because it truncates the local tables before checking the permission on\n> foreign tables (i.e., the local tables in remote servers)... No?\n>\n> We first open and grab exclusive\n> lock on all relations involved, checking permissions and otherwise\n> verifying that the relation is OK for truncation\n> Finally all the relations are truncated and reindexed.\n>\nFujii-san,\n\nWhat does the \"permission checks\" mean in this context?\nThe permission checks on the foreign tables involved are already checked\nat truncate_check_rel(), by PostgreSQL's standard access control.\n\nPlease assume an extreme example below:\n1. I defined a foreign table with file_fdw onto a local CSV file.\n2. Someone tries to scan the foreign table, and ACL allows it.\n3. I disallow the read remission of the CSV using chmod, after the above step,\n but prior to the query execution.\n4. Someone's query moved to the execution stage, then IterateForeignScan()\n raises an error due to OS level permission checks.\n\nFDW is a mechanism to convert from/to external data sources to/from PostgreSQL's\nstructured data, as literal. Once we checked the permissions of\nforeign-tables by\ndatabase ACLs, any other permission checks handled by FDW driver are a part of\nexecution (like, OS permission check when file_fdw read(2) the\nunderlying CSV files).\nAnd, we have no reliable way to check entire permissions preliminary,\nlike OS file\npermission check or database permission checks by remote server. 
Even\nif a checker\nroutine returned a \"hint\", it may be changed at the execution time.\nSomebody might\nchange the \"truncate\" permission at the remote server.\n\nHow about your opinions?\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n", "msg_date": "Thu, 1 Apr 2021 00:09:01 +0900", "msg_from": "Kohei KaiGai <kaigai@heterodb.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "\n\nOn 2021/04/01 0:09, Kohei KaiGai wrote:\n> What does the \"permission checks\" mean in this context?\n> The permission checks on the foreign tables involved are already checked\n> at truncate_check_rel(), by PostgreSQL's standard access control.\n\nI meant that's the permission check that happens in the remote server side.\nFor example, when the foreign table is defined on postgres_fdw and truncated,\nTRUNCATE command is issued to the remote server via postgres_fdw and\nit checks the permission of the table before performing actual truncation.\n\n\n> Please assume an extreme example below:\n> 1. I defined a foreign table with file_fdw onto a local CSV file.\n> 2. Someone tries to scan the foreign table, and ACL allows it.\n> 3. I disallow the read remission of the CSV using chmod, after the above step,\n> but prior to the query execution.\n> 4. Someone's query moved to the execution stage, then IterateForeignScan()\n> raises an error due to OS level permission checks.\n> \n> FDW is a mechanism to convert from/to external data sources to/from PostgreSQL's\n> structured data, as literal. Once we checked the permissions of\n> foreign-tables by\n> database ACLs, any other permission checks handled by FDW driver are a part of\n> execution (like, OS permission check when file_fdw read(2) the\n> underlying CSV files).\n> And, we have no reliable way to check entire permissions preliminary,\n> like OS file\n> permission check or database permission checks by remote server. 
Even\n> if a checker\n> routine returned a \"hint\", it may be changed at the execution time.\n> Somebody might\n> change the \"truncate\" permission at the remote server.\n> \n> How about your opinions?\n\nI agree that something like checker routine might not be so useful and\nalso be overkill. I was thinking that it's better to truncate the foreign tables first\nand the local ones later. Otherwise it's a waste of cycles to truncate\nthe local tables if the truncation on foreign table causes an permission error\non the remote server.\n\nBut ISTM that the order of tables to truncate that the current patch provides\ndoesn't cause any actual bugs. So if you think the current order is better,\nI'm ok with that for now. In this case, the comments for ExecuteTruncate()\nshould be updated.\n\nBTW, the latest patch doesn't seem to be applied cleanly to the master\nbecause of commit 27e1f14563. Could you rebase it?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 1 Apr 2021 18:53:33 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "2021年4月1日(木) 18:53 Fujii Masao <masao.fujii@oss.nttdata.com>:\n>\n> On 2021/04/01 0:09, Kohei KaiGai wrote:\n> > What does the \"permission checks\" mean in this context?\n> > The permission checks on the foreign tables involved are already checked\n> > at truncate_check_rel(), by PostgreSQL's standard access control.\n>\n> I meant that's the permission check that happens in the remote server side.\n> For example, when the foreign table is defined on postgres_fdw and truncated,\n> TRUNCATE command is issued to the remote server via postgres_fdw and\n> it checks the permission of the table before performing actual truncation.\n>\n>\n> > Please assume an extreme example below:\n> > 1. 
I defined a foreign table with file_fdw onto a local CSV file.\n> > 2. Someone tries to scan the foreign table, and ACL allows it.\n> > 3. I disallow the read remission of the CSV using chmod, after the above step,\n> > but prior to the query execution.\n> > 4. Someone's query moved to the execution stage, then IterateForeignScan()\n> > raises an error due to OS level permission checks.\n> >\n> > FDW is a mechanism to convert from/to external data sources to/from PostgreSQL's\n> > structured data, as literal. Once we checked the permissions of\n> > foreign-tables by\n> > database ACLs, any other permission checks handled by FDW driver are a part of\n> > execution (like, OS permission check when file_fdw read(2) the\n> > underlying CSV files).\n> > And, we have no reliable way to check entire permissions preliminary,\n> > like OS file\n> > permission check or database permission checks by remote server. Even\n> > if a checker\n> > routine returned a \"hint\", it may be changed at the execution time.\n> > Somebody might\n> > change the \"truncate\" permission at the remote server.\n> >\n> > How about your opinions?\n>\n> I agree that something like checker routine might not be so useful and\n> also be overkill. I was thinking that it's better to truncate the foreign tables first\n> and the local ones later. Otherwise it's a waste of cycles to truncate\n> the local tables if the truncation on foreign table causes an permission error\n> on the remote server.\n>\n> But ISTM that the order of tables to truncate that the current patch provides\n> doesn't cause any actual bugs. So if you think the current order is better,\n> I'm ok with that for now. In this case, the comments for ExecuteTruncate()\n> should be updated.\n>\nIt is fair enough for me to reverse the order of actual truncation.\n\nHow about the updated comments below?\n\n This is a multi-relation truncate. 
We first open and grab exclusive\n lock on all relations involved, checking permissions (local database\n ACLs even if relations are foreign-tables) and otherwise verifying\n that the relation is OK for truncation. In CASCADE mode, ...(snip)...\n Finally all the relations are truncated and reindexed. If any foreign-\n tables are involved, its callback shall be invoked prior to the truncation\n of regular tables.\n\n> BTW, the latest patch doesn't seem to be applied cleanly to the master\n> because of commit 27e1f14563. Could you rebase it?\n>\nOnishi-san, go ahead. :-)\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n", "msg_date": "Fri, 2 Apr 2021 09:37:50 +0900", "msg_from": "Kohei KaiGai <kaigai@heterodb.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "\n\nOn 2021/04/02 9:37, Kohei KaiGai wrote:\n> It is fair enough for me to reverse the order of actual truncation.\n> \n> How about the updated comments below?\n> \n> This is a multi-relation truncate. We first open and grab exclusive\n> lock on all relations involved, checking permissions (local database\n> ACLs even if relations are foreign-tables) and otherwise verifying\n> that the relation is OK for truncation. In CASCADE mode, ...(snip)...\n> Finally all the relations are truncated and reindexed. If any foreign-\n> tables are involved, its callback shall be invoked prior to the truncation\n> of regular tables.\n\nLGTM.\n\n\n>> BTW, the latest patch doesn't seem to be applied cleanly to the master\n>> because of commit 27e1f14563. Could you rebase it?\n>>\n> Onishi-san, go ahead. 
:-)\n\n+1\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 2 Apr 2021 11:43:58 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "All,\n\nThank you for discussion.\nI've updated the patch (v6->v7) according to the conclusion.\n\nI'll show the modified points:\n1. Comments for ExecuteTuncate()\n2. Replacing extra value in frels_extra with integer to label.\n3. Skipping XLOG_HEAP_TRUNCATE on foreign table\n\nRegards,\n\n2021年4月2日(金) 11:44 Fujii Masao <masao.fujii@oss.nttdata.com>:\n>\n>\n>\n> On 2021/04/02 9:37, Kohei KaiGai wrote:\n> > It is fair enough for me to reverse the order of actual truncation.\n> >\n> > How about the updated comments below?\n> >\n> > This is a multi-relation truncate. We first open and grab exclusive\n> > lock on all relations involved, checking permissions (local database\n> > ACLs even if relations are foreign-tables) and otherwise verifying\n> > that the relation is OK for truncation. In CASCADE mode, ...(snip)...\n> > Finally all the relations are truncated and reindexed. If any foreign-\n> > tables are involved, its callback shall be invoked prior to the truncation\n> > of regular tables.\n>\n> LGTM.\n>\n>\n> >> BTW, the latest patch doesn't seem to be applied cleanly to the master\n> >> because of commit 27e1f14563. Could you rebase it?\n> >>\n> > Onishi-san, go ahead. 
:-)\n>\n> +1\n>\n> Regards,\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION", "msg_date": "Sat, 3 Apr 2021 09:53:57 +0900", "msg_from": "Kazutaka Onishi <onishi@heterodb.com>", "msg_from_op": true, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "Sorry but I found the v7 patch has typo and it can't be built...\nI attached fixed one(v8).\n\n2021年4月3日(土) 9:53 Kazutaka Onishi <onishi@heterodb.com>:\n>\n> All,\n>\n> Thank you for discussion.\n> I've updated the patch (v6->v7) according to the conclusion.\n>\n> I'll show the modified points:\n> 1. Comments for ExecuteTuncate()\n> 2. Replacing extra value in frels_extra with integer to label.\n> 3. Skipping XLOG_HEAP_TRUNCATE on foreign table\n>\n> Regards,\n>\n> 2021年4月2日(金) 11:44 Fujii Masao <masao.fujii@oss.nttdata.com>:\n> >\n> >\n> >\n> > On 2021/04/02 9:37, Kohei KaiGai wrote:\n> > > It is fair enough for me to reverse the order of actual truncation.\n> > >\n> > > How about the updated comments below?\n> > >\n> > > This is a multi-relation truncate. We first open and grab exclusive\n> > > lock on all relations involved, checking permissions (local database\n> > > ACLs even if relations are foreign-tables) and otherwise verifying\n> > > that the relation is OK for truncation. In CASCADE mode, ...(snip)...\n> > > Finally all the relations are truncated and reindexed. If any foreign-\n> > > tables are involved, its callback shall be invoked prior to the truncation\n> > > of regular tables.\n> >\n> > LGTM.\n> >\n> >\n> > >> BTW, the latest patch doesn't seem to be applied cleanly to the master\n> > >> because of commit 27e1f14563. Could you rebase it?\n> > >>\n> > > Onishi-san, go ahead. 
:-)\n> >\n> > +1\n> >\n> > Regards,\n> >\n> > --\n> > Fujii Masao\n> > Advanced Computing Technology Center\n> > Research and Development Headquarters\n> > NTT DATA CORPORATION", "msg_date": "Sat, 3 Apr 2021 22:46:40 +0900", "msg_from": "Kazutaka Onishi <onishi@heterodb.com>", "msg_from_op": true, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Sat, Apr 3, 2021 at 7:16 PM Kazutaka Onishi <onishi@heterodb.com> wrote:\n>\n> Sorry but I found the v7 patch has typo and it can't be built...\n> I attached fixed one(v8).\n\nThanks for the patch. Here are some comments on v8 patch:\n1) We usually have the struct name after \"+typedef struct\nForeignTruncateInfo\", please refer to other struct defs in the code\nbase.\n\n2) We should add ORDER BY clause(probably ORDER BY id?) for data\ngenerating select queries in added tests, otherwise tests might become\nunstable.\n\n3) How about dropping the tables, foreign tables that got created for\ntesting in postgres_fdw.sql?\n\n4) I think it's not \"foreign-tables\"/\"foreign-table\", it can be\n\"foreign tables\"/\"foreign table\", other places in the docs use this\nconvention.\n+ the context where the foreign-tables are truncated. It is a list\nof integers and same length with\n\n5) Can't we use do_sql_command function after making it non static? 
We\ncould go extra mile, that is we could make do_sql_command little more\ngeneric by passing some enum for each of PQsendQuery,\nPQsendQueryParams, PQsendQueryPrepared and PQsendPrepare and replace\nthe respective code chunks with do_sql_command in postgres_fdw.c.\n\n+ /* run remote query */\n+ if (!PQsendQuery(conn, sql.data))\n+ pgfdw_report_error(ERROR, NULL, conn, false, sql.data);\n+ res = pgfdw_get_result(conn, sql.data);\n+ if (PQresultStatus(res) != PGRES_COMMAND_OK)\n+ pgfdw_report_error(ERROR, res, conn, true, sql.data);\n+ /* clean-up */\n+ PQclear(res);\n\n6) A white space error when the patch is applied.\ncontrib/postgres_fdw/postgres_fdw.c:2913: trailing whitespace.\n+\n\n7) I may be missing something here. Why do we need a hash table at\nall? We could just do it with a linked list right? Is there a specific\nreason to use a hash table? IIUC, the hash table entries will be lying\naround until the local session exists since we are not doing\nhash_destroy.\n\n8) How about having something like this?\n+ <command>TRUNCATE</command> can be used for foreign tables if the\nforeign data wrapper supports, for instance, see <xref\nlinkend=\"postgres-fdw\"/>.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 3 Apr 2021 20:04:33 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "Hi,\n+ <command>TRUNCATE</command> for each foreign server being involved\n+ in one <command>TRUNCATE</command> command (note that invocations\n\nThe 'being' in above sentence can be omitted.\n\n+ the context where the foreign-tables are truncated. It is a list of\nintegers and same length with\n\nThere should be a verb between 'and' and same :\nIt is a list of integers and has same length with\n\n+ * Information related to truncation of foreign tables. 
This is used for\n+ * the elements in a hash table *that* uses the server OID as lookup key,\n\nThe 'uses' is for 'This' (the struct), so 'that' should be 'and':\n\nthe elements in a hash table and uses\n\nAlternatively:\n\nthe elements in a hash table. It uses\n\n+ relids_extra = lappend_int(relids_extra, (recurse ?\nTRUNCATE_REL_CONTEXT__NORMAL : TRUNCATE_REL_CONTEXT__ONLY));\n\nI am curious: isn't one underscore enough in the identifier (before NORMAL\nand ONLY) ?\n\nI suggest naming them TRUNCATE_REL_CONTEXT_NORMAL and\nTRUNCATE_REL_CONTEXT_ONLY\n\nCheers\n\nOn Sat, Apr 3, 2021 at 6:46 AM Kazutaka Onishi <onishi@heterodb.com> wrote:\n\n> Sorry but I found the v7 patch has typo and it can't be built...\n> I attached fixed one(v8).\n>\n> 2021年4月3日(土) 9:53 Kazutaka Onishi <onishi@heterodb.com>:\n> >\n> > All,\n> >\n> > Thank you for discussion.\n> > I've updated the patch (v6->v7) according to the conclusion.\n> >\n> > I'll show the modified points:\n> > 1. Comments for ExecuteTuncate()\n> > 2. Replacing extra value in frels_extra with integer to label.\n> > 3. Skipping XLOG_HEAP_TRUNCATE on foreign table\n> >\n> > Regards,\n> >\n> > 2021年4月2日(金) 11:44 Fujii Masao <masao.fujii@oss.nttdata.com>:\n> > >\n> > >\n> > >\n> > > On 2021/04/02 9:37, Kohei KaiGai wrote:\n> > > > It is fair enough for me to reverse the order of actual truncation.\n> > > >\n> > > > How about the updated comments below?\n> > > >\n> > > > This is a multi-relation truncate. We first open and grab\n> exclusive\n> > > > lock on all relations involved, checking permissions (local\n> database\n> > > > ACLs even if relations are foreign-tables) and otherwise\n> verifying\n> > > > that the relation is OK for truncation. In CASCADE mode,\n> ...(snip)...\n> > > > Finally all the relations are truncated and reindexed. 
If any\n> foreign-\n> > > >      tables are involved, its callback shall be invoked prior to the\n> truncation\n> > > >      of regular tables.\n> > >\n> > > LGTM.\n> > >\n> > >\n> > > >> BTW, the latest patch doesn't seem to be applied cleanly to the\n> master\n> > > >> because of commit 27e1f14563. Could you rebase it?\n> > > >>\n> > > > Onishi-san, go ahead. :-)\n> > >\n> > > +1\n> > >\n> > > Regards,\n> > >\n> > > --\n> > > Fujii Masao\n> > > Advanced Computing Technology Center\n> > > Research and Development Headquarters\n> > > NTT DATA CORPORATION\n>", "msg_date": "Sat, 3 Apr 2021 07:38:50 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "Continuing previous review...\n\n+               relids_extra = lappend_int(relids_extra,\nTRUNCATE_REL_CONTEXT__CASCADED);\n\nI wonder if TRUNCATE_REL_CONTEXT_CASCADING is better\nthan TRUNCATE_REL_CONTEXT__CASCADED. Note the removal of the extra\nunderscore.\nIn English, we say: truncation cascading to foreign table.\n\nw.r.t. 
Bharath's question on using hash table, I think the reason is that\nthe search would be more efficient:\n\n+ ft_info = hash_search(ft_htab, &server_oid, HASH_ENTER, &found);\nand\n+ while ((ft_info = hash_seq_search(&seq)) != NULL)\n\n\n+ * Now go through the hash table, and process each entry associated to\nthe\n+ * servers involved in the TRUNCATE.\n\nassociated to -> associated with\n\nShould the hash table be released at the end of ExecuteTruncateGuts() ?\n\nCheers\n\nOn Sat, Apr 3, 2021 at 7:38 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n> Hi,\n> + <command>TRUNCATE</command> for each foreign server being involved\n> + in one <command>TRUNCATE</command> command (note that invocations\n>\n> The 'being' in above sentence can be omitted.\n>\n> + the context where the foreign-tables are truncated. It is a list of\n> integers and same length with\n>\n> There should be a verb between 'and' and same :\n> It is a list of integers and has same length with\n>\n> + * Information related to truncation of foreign tables. This is used for\n> + * the elements in a hash table *that* uses the server OID as lookup key,\n>\n> The 'uses' is for 'This' (the struct), so 'that' should be 'and':\n>\n> the elements in a hash table and uses\n>\n> Alternatively:\n>\n> the elements in a hash table. 
It uses\n>\n> + relids_extra = lappend_int(relids_extra, (recurse ?\n> TRUNCATE_REL_CONTEXT__NORMAL : TRUNCATE_REL_CONTEXT__ONLY));\n>\n> I am curious: isn't one underscore enough in the identifier (before NORMAL\n> and ONLY) ?\n>\n> I suggest naming them TRUNCATE_REL_CONTEXT_NORMAL and\n> TRUNCATE_REL_CONTEXT_ONLY\n>\n> Cheers\n>\n> On Sat, Apr 3, 2021 at 6:46 AM Kazutaka Onishi <onishi@heterodb.com>\n> wrote:\n>\n>> Sorry but I found the v7 patch has typo and it can't be built...\n>> I attached fixed one(v8).\n>>\n>> 2021年4月3日(土) 9:53 Kazutaka Onishi <onishi@heterodb.com>:\n>> >\n>> > All,\n>> >\n>> > Thank you for discussion.\n>> > I've updated the patch (v6->v7) according to the conclusion.\n>> >\n>> > I'll show the modified points:\n>> > 1. Comments for ExecuteTuncate()\n>> > 2. Replacing extra value in frels_extra with integer to label.\n>> > 3. Skipping XLOG_HEAP_TRUNCATE on foreign table\n>> >\n>> > Regards,\n>> >\n>> > 2021年4月2日(金) 11:44 Fujii Masao <masao.fujii@oss.nttdata.com>:\n>> > >\n>> > >\n>> > >\n>> > > On 2021/04/02 9:37, Kohei KaiGai wrote:\n>> > > > It is fair enough for me to reverse the order of actual truncation.\n>> > > >\n>> > > > How about the updated comments below?\n>> > > >\n>> > > > This is a multi-relation truncate. We first open and grab\n>> exclusive\n>> > > > lock on all relations involved, checking permissions (local\n>> database\n>> > > > ACLs even if relations are foreign-tables) and otherwise\n>> verifying\n>> > > > that the relation is OK for truncation. In CASCADE mode,\n>> ...(snip)...\n>> > > > Finally all the relations are truncated and reindexed. If any\n>> foreign-\n>> > > > tables are involved, its callback shall be invoked prior to\n>> the truncation\n>> > > > of regular tables.\n>> > >\n>> > > LGTM.\n>> > >\n>> > >\n>> > > >> BTW, the latest patch doesn't seem to be applied cleanly to the\n>> master\n>> > > >> because of commit 27e1f14563. Could you rebase it?\n>> > > >>\n>> > > > Onishi-san, go ahead. 
:-)\n>> > >\n>> > > +1\n>> > >\n>> > > Regards,\n>> > >\n>> > > --\n>> > > Fujii Masao\n>> > > Advanced Computing Technology Center\n>> > > Research and Development Headquarters\n>> > > NTT DATA CORPORATION\n>>\n>", "msg_date": "Sat, 3 Apr 2021 08:04:51 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Sat, Apr 3, 2021 at 8:31 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> w.r.t. Bharath's question on using hash table, I think the reason is that the search would be more efficient:\n\nGenerally, sequential search would be slower if there are many entries\nin a list. 
Here, the use case is to store all the foreign table ids\n> associated with each foreign server and I'm not sure how many foreign\n> tables will be provided in a single truncate command that belong to\n> different foreign servers. I strongly feel the count will be less and\n> using a list would be easier than to have a hash table. Others may\n> have better opinions.\n>\nhttps://www.postgresql.org/message-id/20200115081126.GK2243@paquier.xyz\n\nIt was originally implemented using a simple list, then modified according to\nthe comment by Michael.\nI think it is just a matter of preference.\n\n> > Should the hash table be released at the end of ExecuteTruncateGuts() ?\n>\n> If we go with a hash table and think that the frequency of \"TRUNCATE\"\n> commands on foreign tables is heavy in a local session, then it does\n> make sense to not destroy the hash, otherwise destroy the hash.\n>\nIn most cases, TRUNCATE is not a command frequently executed.\nSo, exactly, it is just a matter of preference.\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n", "msg_date": "Sun, 4 Apr 2021 14:13:39 +0900", "msg_from": "Kohei KaiGai <kaigai@heterodb.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "Hi\n\nFor now, I've fixed the v8 according to your comments, excluding\nanything related with 'hash table' and 'do_sql_commands'.\n\n> 1) We usually have the struct name after \"+typedef struct\n> ForeignTruncateInfo\", please refer to other struct defs in the code\n> base.\n\nI've modified the definition.\nBy the way, there're many \"typedef struct{ ... }NameOfStruct;\" in\ncodes, about 40% of other struct defs (checked by find&grep),\nthus I felt the way is not \"MUST\".\n\n> 2) We should add ORDER BY clause(probably ORDER BY id?) 
for data\n> generating select queries in added tests, otherwise tests might become\n> unstable.\n\nI've added \"ORDER BY\" at the postges_fdw test.\n\n> 3) How about dropping the tables, foreign tables that got created for\n> testing in postgres_fdw.sql?\n\nI've added \"cleanup\" commands.\n\n> 4) I think it's not \"foreign-tables\"/\"foreign-table\", it can be\n> \"foreign tables\"/\"foreign table\", other places in the docs use this\n> convention.\n> + the context where the foreign-tables are truncated. It is a list\n> of integers and same length with\n\nI've replaced \"foreign-table\" to \"foreign table\".\n\n> 5) Can't we use do_sql_command function after making it non static? We\n> could go extra mile, that is we could make do_sql_command little more\n> generic by passing some enum for each of PQsendQuery,\n> PQsendQueryParams, PQsendQueryPrepared and PQsendPrepare and replace\n> the respective code chunks with do_sql_command in postgres_fdw.c.\n\nI've skipped this for now.\nI feel it sounds cool, but not easy.\nIt should be added by another patch because it's not only related to TRUNCATE.\n\n> 6) A white space error when the patch is applied.\n> contrib/postgres_fdw/postgres_fdw.c:2913: trailing whitespace.\n\nI've checked the patch and clean spaces.\nBut I can't confirmed this message by attaching(patch -p1 < ...) my v8 patch.\nIf this still occurs, please tell me how you attach the patch.\n\n> 7) I may be missing something here. Why do we need a hash table at\n> all? We could just do it with a linked list right? Is there a specific\n> reason to use a hash table? IIUC, the hash table entries will be lying\n> around until the local session exists since we are not doing\n> hash_destroy.\n\nI've skipped this for now.\n\n\n> 8) How about having something like this?\n> + <command>TRUNCATE</command> can be used for foreign tables if the\n> foreign data wrapper supports, for instance, see <xref\n> linkend=\"postgres-fdw\"/>.\n\nSounds good. 
I've added.\n\n\n9)\n> + <command>TRUNCATE</command> for each foreign server being involved\n>\n> + in one <command>TRUNCATE</command> command (note that invocations\n> The 'being' in above sentence can be omitted.\n\n I've fixed this.\n\n\n10)\n> + the context where the foreign-tables are truncated. It is a list of integers and same length with\n> There should be a verb between 'and' and same :\n> It is a list of integers and has same length with\n\nI've fixed this.\n\n11)\n> + * Information related to truncation of foreign tables. This is used for\n> + * the elements in a hash table that uses the server OID as lookup key,\n> The 'uses' is for 'This' (the struct), so 'that' should be 'and':\n> the elements in a hash table and uses\n> Alternatively:\n> the elements in a hash table. It uses\n\nI've fixed this.\n\n12)\n> + relids_extra = lappend_int(relids_extra, (recurse ? TRUNCATE_REL_CONTEXT__NORMAL : TRUNCATE_REL_CONTEXT__ONLY));\n> I am curious: isn't one underscore enough in the identifier (before NORMAL and ONLY) ?\n> I suggest naming them TRUNCATE_REL_CONTEXT_NORMAL and TRUNCATE_REL_CONTEXT_ONLY\n\n> + relids_extra = lappend_int(relids_extra, TRUNCATE_REL_CONTEXT__CASCADED);\n> I wonder if TRUNCATE_REL_CONTEXT_CASCADING is better than TRUNCATE_REL_CONTEXT__CASCADED. Note the removal of the extra underscore.\n> In English, we say: truncation cascading to foreign table.\n> w.r.t. 
Bharath's question on using hash table, I think the reason is that the search would be more efficient:\n\nI've changed these labels shown below:\nTRUNCATE_REL_CONTEXT__NORMAL -> TRUNCATE_REL_CONTEXT_NORMAL\nTRUNCATE_REL_CONTEXT__ONLY -> TRUNCATE_REL_CONTEXT_ONLY\nTRUNCATE_REL_CONTEXT__CASCADED -> TRUNCATE_REL_CONTEXT_CASCADING\n\n14)\n> + ft_info = hash_search(ft_htab, &server_oid, HASH_ENTER, &found);\n> and\n> + while ((ft_info = hash_seq_search(&seq)) != NULL)\n> + * Now go through the hash table, and process each entry associated to the\n> + * servers involved in the TRUNCATE.\n> associated to -> associated with\n\nI've fixed this.\n\n14) Should the hash table be released at the end of ExecuteTruncateGuts() ?\n\nI've skipped this for now.\n\n2021年4月4日(日) 14:13 Kohei KaiGai <kaigai@heterodb.com>:\n>\n> 2021年4月4日(日) 13:07 Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>:\n> >\n> > On Sat, Apr 3, 2021 at 8:31 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > > w.r.t. Bharath's question on using hash table, I think the reason is that the search would be more efficient:\n> >\n> > Generally, sequential search would be slower if there are many entries\n> > in a list. Here, the use case is to store all the foreign table ids\n> > associated with each foreign server and I'm not sure how many foreign\n> > tables will be provided in a single truncate command that belong to\n> > different foreign servers. I strongly feel the count will be less and\n> > using a list would be easier than to have a hash table. 
Others may\n> > have better opinions.\n> >\n> https://www.postgresql.org/message-id/20200115081126.GK2243@paquier.xyz\n>\n> It was originally implemented using a simple list, then modified according to\n> the comment by Michael.\n> I think it is just a matter of preference.\n>\n> > > Should the hash table be released at the end of ExecuteTruncateGuts() ?\n> >\n> > If we go with a hash table and think that the frequency of \"TRUNCATE\"\n> > commands on foreign tables is heavy in a local session, then it does\n> > make sense to not destroy the hash, otherwise destroy the hash.\n> >\n> In most cases, TRUNCATE is not a command frequently executed.\n> So, exactly, it is just a matter of preference.\n>\n> Best regards,\n> --\n> HeteroDB, Inc / The PG-Strom Project\n> KaiGai Kohei <kaigai@heterodb.com>", "msg_date": "Sun, 4 Apr 2021 15:30:23 +0900", "msg_from": "Kazutaka Onishi <onishi@heterodb.com>", "msg_from_op": true, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "v9 has also typo because I haven't checked about doc... sorry.\n\n2021年4月4日(日) 15:30 Kazutaka Onishi <onishi@heterodb.com>:\n>\n> Hi\n>\n> For now, I've fixed the v8 according to your comments, excluding\n> anything related with 'hash table' and 'do_sql_commands'.\n>\n> > 1) We usually have the struct name after \"+typedef struct\n> > ForeignTruncateInfo\", please refer to other struct defs in the code\n> > base.\n>\n> I've modified the definition.\n> By the way, there're many \"typedef struct{ ... }NameOfStruct;\" in\n> codes, about 40% of other struct defs (checked by find&grep),\n> thus I felt the way is not \"MUST\".\n>\n> > 2) We should add ORDER BY clause(probably ORDER BY id?) 
for data\n> > generating select queries in added tests, otherwise tests might become\n> > unstable.\n>\n> I've added \"ORDER BY\" at the postges_fdw test.\n>\n> > 3) How about dropping the tables, foreign tables that got created for\n> > testing in postgres_fdw.sql?\n>\n> I've added \"cleanup\" commands.\n>\n> > 4) I think it's not \"foreign-tables\"/\"foreign-table\", it can be\n> > \"foreign tables\"/\"foreign table\", other places in the docs use this\n> > convention.\n> > + the context where the foreign-tables are truncated. It is a list\n> > of integers and same length with\n>\n> I've replaced \"foreign-table\" to \"foreign table\".\n>\n> > 5) Can't we use do_sql_command function after making it non static? We\n> > could go extra mile, that is we could make do_sql_command little more\n> > generic by passing some enum for each of PQsendQuery,\n> > PQsendQueryParams, PQsendQueryPrepared and PQsendPrepare and replace\n> > the respective code chunks with do_sql_command in postgres_fdw.c.\n>\n> I've skipped this for now.\n> I feel it sounds cool, but not easy.\n> It should be added by another patch because it's not only related to TRUNCATE.\n>\n> > 6) A white space error when the patch is applied.\n> > contrib/postgres_fdw/postgres_fdw.c:2913: trailing whitespace.\n>\n> I've checked the patch and clean spaces.\n> But I can't confirmed this message by attaching(patch -p1 < ...) my v8 patch.\n> If this still occurs, please tell me how you attach the patch.\n>\n> > 7) I may be missing something here. Why do we need a hash table at\n> > all? We could just do it with a linked list right? Is there a specific\n> > reason to use a hash table? 
IIUC, the hash table entries will be lying\n> > around until the local session exists since we are not doing\n> > hash_destroy.\n>\n> I've skipped this for now.\n>\n>\n> > 8) How about having something like this?\n> > + <command>TRUNCATE</command> can be used for foreign tables if the\n> > foreign data wrapper supports, for instance, see <xref\n> > linkend=\"postgres-fdw\"/>.\n>\n> Sounds good. I've added.\n>\n>\n> 9)\n> > + <command>TRUNCATE</command> for each foreign server being involved\n> >\n> > + in one <command>TRUNCATE</command> command (note that invocations\n> > The 'being' in above sentence can be omitted.\n>\n> I've fixed this.\n>\n>\n> 10)\n> > + the context where the foreign-tables are truncated. It is a list of integers and same length with\n> > There should be a verb between 'and' and same :\n> > It is a list of integers and has same length with\n>\n> I've fixed this.\n>\n> 11)\n> > + * Information related to truncation of foreign tables. This is used for\n> > + * the elements in a hash table that uses the server OID as lookup key,\n> > The 'uses' is for 'This' (the struct), so 'that' should be 'and':\n> > the elements in a hash table and uses\n> > Alternatively:\n> > the elements in a hash table. It uses\n>\n> I've fixed this.\n>\n> 12)\n> > + relids_extra = lappend_int(relids_extra, (recurse ? TRUNCATE_REL_CONTEXT__NORMAL : TRUNCATE_REL_CONTEXT__ONLY));\n> > I am curious: isn't one underscore enough in the identifier (before NORMAL and ONLY) ?\n> > I suggest naming them TRUNCATE_REL_CONTEXT_NORMAL and TRUNCATE_REL_CONTEXT_ONLY\n>\n> > + relids_extra = lappend_int(relids_extra, TRUNCATE_REL_CONTEXT__CASCADED);\n> > I wonder if TRUNCATE_REL_CONTEXT_CASCADING is better than TRUNCATE_REL_CONTEXT__CASCADED. Note the removal of the extra underscore.\n> > In English, we say: truncation cascading to foreign table.\n> > w.r.t. 
Bharath's question on using hash table, I think the reason is that the search would be more efficient:\n>\n> I've changed these labels shown below:\n> TRUNCATE_REL_CONTEXT__NORMAL -> TRUNCATE_REL_CONTEXT_NORMAL\n> TRUNCATE_REL_CONTEXT__ONLY -> TRUNCATE_REL_CONTEXT_ONLY\n> TRUNCATE_REL_CONTEXT__CASCADED -> TRUNCATE_REL_CONTEXT_CASCADING\n>\n> 14)\n> > + ft_info = hash_search(ft_htab, &server_oid, HASH_ENTER, &found);\n> > and\n> > + while ((ft_info = hash_seq_search(&seq)) != NULL)\n> > + * Now go through the hash table, and process each entry associated to the\n> > + * servers involved in the TRUNCATE.\n> > associated to -> associated with\n>\n> I've fixed this.\n>\n> 14) Should the hash table be released at the end of ExecuteTruncateGuts() ?\n>\n> I've skipped this for now.\n>\n> 2021年4月4日(日) 14:13 Kohei KaiGai <kaigai@heterodb.com>:\n> >\n> > 2021年4月4日(日) 13:07 Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>:\n> > >\n> > > On Sat, Apr 3, 2021 at 8:31 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > > > w.r.t. Bharath's question on using hash table, I think the reason is that the search would be more efficient:\n> > >\n> > > Generally, sequential search would be slower if there are many entries\n> > > in a list. Here, the use case is to store all the foreign table ids\n> > > associated with each foreign server and I'm not sure how many foreign\n> > > tables will be provided in a single truncate command that belong to\n> > > different foreign servers. I strongly feel the count will be less and\n> > > using a list would be easier than to have a hash table. 
Others may\n> > > have better opinions.\n> > >\n> > https://www.postgresql.org/message-id/20200115081126.GK2243@paquier.xyz\n> >\n> > It was originally implemented using a simple list, then modified according to\n> > the comment by Michael.\n> > I think it is just a matter of preference.\n> >\n> > > > Should the hash table be released at the end of ExecuteTruncateGuts() ?\n> > >\n> > > If we go with a hash table and think that the frequency of \"TRUNCATE\"\n> > > commands on foreign tables is heavy in a local session, then it does\n> > > make sense to not destroy the hash, otherwise destroy the hash.\n> > >\n> > In most cases, TRUNCATE is not a command frequently executed.\n> > So, exactly, it is just a matter of preference.\n> >\n> > Best regards,\n> > --\n> > HeteroDB, Inc / The PG-Strom Project\n> > KaiGai Kohei <kaigai@heterodb.com>", "msg_date": "Sun, 4 Apr 2021 16:18:06 +0900", "msg_from": "Kazutaka Onishi <onishi@heterodb.com>", "msg_from_op": true, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Sun, Apr 4, 2021 at 12:48 PM Kazutaka Onishi <onishi@heterodb.com> wrote:\n>\n> v9 has also typo because I haven't checked about doc... sorry.\n\nI think v9 has some changes not related to the foreign table truncate\nfeature. If yes, please remove those changes and provide a proper\npatch.\n\ndiff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml\ndiff --git a/src/backend/bootstrap/bootstrap.c\nb/src/backend/bootstrap/bootstrap.c\ndiff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c\n....\n....\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 4 Apr 2021 20:26:36 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "Oops... 
sorry.\nI haven't merged my working git branch with remote master branch.\nPlease check this v11.\n\n2021年4月4日(日) 23:56 Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>:\n>\n> On Sun, Apr 4, 2021 at 12:48 PM Kazutaka Onishi <onishi@heterodb.com> wrote:\n> >\n> > v9 has also typo because I haven't checked about doc... sorry.\n>\n> I think v9 has some changes not related to the foreign table truncate\n> feature. If yes, please remove those changes and provide a proper\n> patch.\n>\n> diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml\n> diff --git a/src/backend/bootstrap/bootstrap.c\n> b/src/backend/bootstrap/bootstrap.c\n> diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c\n> ....\n> ....\n>\n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 5 Apr 2021 00:18:27 +0900", "msg_from": "Kazutaka Onishi <onishi@heterodb.com>", "msg_from_op": true, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Sun, Apr 4, 2021 at 12:00 PM Kazutaka Onishi <onishi@heterodb.com> wrote:\n> > 5) Can't we use do_sql_command function after making it non static? We\n> > could go extra mile, that is we could make do_sql_command little more\n> > generic by passing some enum for each of PQsendQuery,\n> > PQsendQueryParams, PQsendQueryPrepared and PQsendPrepare and replace\n> > the respective code chunks with do_sql_command in postgres_fdw.c.\n>\n> I've skipped this for now.\n> I feel it sounds cool, but not easy.\n> It should be added by another patch because it's not only related to TRUNCATE.\n\nFair enough! I will give it a thought and provide a patch separately.\n\n> > 6) A white space error when the patch is applied.\n> > contrib/postgres_fdw/postgres_fdw.c:2913: trailing whitespace.\n>\n> I've checked the patch and clean spaces.\n> But I can't confirmed this message by attaching(patch -p1 < ...) 
my v8 patch.\n> If this still occurs, please tell me how you attach the patch.\n\nI usually follow these steps:\n1) write code 2) git diff --check (will give if there are any white\nspace or indentation errors) 3) git add -u 4) git commit (will enter a\ncommit message) 5) git format-patch -1 <<sha of the commit>> -v\n<<version number>> 6) to apply patch, git apply <<patch_name>>.patch\n\n> > 7) I may be missing something here. Why do we need a hash table at\n> > all? We could just do it with a linked list right? Is there a specific\n> > reason to use a hash table? IIUC, the hash table entries will be lying\n> > around until the local session exists since we are not doing\n> > hash_destroy.\n>\n> I've skipped this for now.\n\nIf you don't destroy the hash, you are going to cause a memory leak.\nBecause, the pointer to hash tableft_htab is local to\nExecuteTruncateGuts (note that it's not a static variable) and you are\ncreating a memory for the hash table and leaving the function without\ncleaning it up. IMHO, we should destroy the hash memory at the end of\nExecuteTruncateGuts.\n\nAnother comment for tests, why do we need to do full outer join of two\ntables to just show up there are some rows in the table? I would\nsuggest that all the tests introduced in the patch can be something\nlike this: 1) before truncate, just show the count(*) from the table\n2) truncate the foreign tables 3) after truncate, just show the\ncount(*) which should be 0. 
Because we don't care what the data is in\nthe tables, we only care about whether truncate is happened or not.\n\n+SELECT * FROM tru_ftable a FULL OUTER JOIN tru_pk_ftable b ON a.id =\nb.id ORDER BY a.id;\n+ id | x | id | z\n+----+----------------------------------+----+----------------------------------\n+ 1 | eccbc87e4b5ce2fe28308fd9f2a7baf3 | |\n+ 2 | a87ff679a2f3e71d9181a67b7542122c | |\n+ 3 | e4da3b7fbbce2345d7772b0674a318d5 | 3 | 1679091c5a880faf6fb5e6087eb1b2dc\n+ 4 | 1679091c5a880faf6fb5e6087eb1b2dc | 4 | 8f14e45fceea167a5a36dedd4bea2543\n+ 5 | 8f14e45fceea167a5a36dedd4bea2543 | 5 | c9f0f895fb98ab9159f51fd0297e236d\n+ 6 | c9f0f895fb98ab9159f51fd0297e236d | 6 | 45c48cce2e2d7fbdea1afc51c7c6ad26\n+ 7 | 45c48cce2e2d7fbdea1afc51c7c6ad26 | 7 | d3d9446802a44259755d38e6d163e820\n+ 8 | d3d9446802a44259755d38e6d163e820 | 8 | 6512bd43d9caa6e02c990b0a82652dca\n+ | | 9 | c20ad4d76fe97759aa27a0c99bff6710\n+ | | 10 | c51ce410c124a10e0db5e4b97fc2af39\n+(10 rows)\n+\n+TRUNCATE tru_ftable, tru_pk_ftable CASCADE;\n+SELECT * FROM tru_ftable a FULL OUTER JOIN tru_pk_ftable b ON a.id =\nb.id ORDER BY a.id; -- empty\n\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 4 Apr 2021 20:50:41 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "Thank you for checking v11!\nI've updated it and attached v12.\n\n> I usually follow these steps:\n> 1) write code 2) git diff --check (will give if there are any white\n> space or indentation errors) 3) git add -u 4) git commit (will enter a\n> commit message) 5) git format-patch -1 <<sha of the commit>> -v\n> <<version number>> 6) to apply patch, git apply <<patch_name>>.patch\n\nthanks, I've removed these whitespaces and confirmed no warnings\noccurred when I run \"git apply <<patch_name>>.patch\"\n\n> If you don't destroy the hash, you are going to cause a memory 
leak.\n> Because, the pointer to hash tableft_htab is local to\n> ExecuteTruncateGuts (note that it's not a static variable) and you are\n> creating a memory for the hash table and leaving the function without\n> cleaning it up. IMHO, we should destroy the hash memory at the end of\n> ExecuteTruncateGuts.\n\nSure. I've added hash_destroy().\n\n> Another comment for tests, why do we need to do full outer join of two\n> tables to just show up there are some rows in the table? I would\n> suggest that all the tests introduced in the patch can be something\n> like this: 1) before truncate, just show the count(*) from the table\n> 2) truncate the foreign tables 3) after truncate, just show the\n> count(*) which should be 0. Because we don't care what the data is in\n> the tables, we only care about whether truncate is happened or not.\n\nSure. I've replaced the test command \"SELECT * FROM ...\" with\n\"SELECT COUNT(*) FROM ...\"\nHowever, for example, the \"id\" column is used to check after running\nTRUNCATE with ONLY clause to the inherited table.\nThus, I use \"sum(id)\" instead of \"count(*)\" to check the result when\nthe table has records.", "msg_date": "Mon, 5 Apr 2021 01:53:18 +0900", "msg_from": "Kazutaka Onishi <onishi@heterodb.com>", "msg_from_op": true, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Sun, Apr 4, 2021 at 10:23 PM Kazutaka Onishi <onishi@heterodb.com> wrote:\n> Sure. I've replaced the test command \"SELECT * FROM ...\" with\n> \"SELECT COUNT(*) FROM ...\"\n> However, for example, the \"id\" column is used to check after running\n> TRUNCATE with ONLY clause to the inherited table.\n> Thus, I use \"sum(id)\" instead of \"count(*)\" to check the result when\n> the table has records.\n\nI still don't understand why we need sum(id), not count(*). 
Am I\nmissing something here?\n\nHere are some more comments on the v12 patch:\n1) Instead of switch case, a simple if else clause would reduce the code a bit:\n if (behavior == DROP_RESTRICT)\n appendStringInfoString(buf, \" RESTRICT\");\n else if (behavior == DROP_CASCADE)\n appendStringInfoString(buf, \" CASCADE\");\n\n2) Some coding style comments:\nIt's better to have a new line after variable declarations,\nassignments, function calls, before if blocks, after if blocks for\nbetter readability of the code.\n+ appendStringInfoString(buf, \"TRUNCATE \"); ---> new line after this\n+ forboth (lc1, frels_list,\n\n+ } ---> new line after this\n+ appendStringInfo(buf, \" %s IDENTITY\",\n\n+ /* ensure the target foreign table is truncatable */\n+ truncatable = server_truncatable; ---> new line after this\n+ foreach (cell, ft->options)\n\n+ } ---> new line after this\n+ if (!truncatable)\n\n+ } ---> new line after this\n+ /* set up remote query */\n+ initStringInfo(&sql);\n+ deparseTruncateSql(&sql, frels_list, frels_extra, behavior,\nrestart_seqs); ---> new line after this\n+ /* run remote query */\n+ if (!PQsendQuery(conn, sql.data))\n+ pgfdw_report_error(ERROR, NULL, conn, false, sql.data);\n---> new line after this\n+ res = pgfdw_get_result(conn, sql.data); ---> new line after this\n+ if (PQresultStatus(res) != PGRES_COMMAND_OK)\n+ pgfdw_report_error(ERROR, res, conn, true, sql.data); --->\nnew line after this\n+ /* clean-up */\n+ PQclear(res);\n+ pfree(sql.data);\n+}\n\nand so on.\n\na space after \"false,\" and before \"NULL\"\n+ conn = GetConnection(user, false,NULL);\n\nbring lc2, frels_extra to the same of lc1, frels_list\n+ forboth (lc1, frels_list,\n+ lc2, frels_extra)\n\n3) I think we need truncatable behaviour that is consistent with updatable.\nWith your patch, seems like below is the behaviour for truncatable:\nboth server and foreign table are truncatable = truncated\nserver is not truncatable and foreign table is truncatable = not\ntruncated and error 
out\nserver is truncatable and foreign table is not truncatable = not\ntruncated and error out\nserver is not truncatable and foreign table is not truncatable = not\ntruncated and error out\n\nBelow is the behaviour for updatable:\nboth server and foreign table are updatable = updated\nserver is not updatable and foreign table is updatable = updated\nserver is updatable and foreign table is not updatable = not updated\nserver is not updatable and foreign table is not updatable = not updated\n\nAnd also see comment in postgresIsForeignRelUpdatable\n /*\n * By default, all postgres_fdw foreign tables are assumed updatable. This\n * can be overridden by a per-server setting, which in turn can be\n * overridden by a per-table setting.\n */\n\nIMO, you could do the same thing for truncatable, change is required\nin your patch:\nboth server and foreign table are truncatable = truncated\nserver is not truncatable and foreign table is truncatable = truncated\nserver is truncatable and foreign table is not truncatable = not\ntruncated and error out\nserver is not truncatable and foreign table is not truncatable = not\ntruncated and error out\n\n4) GetConnection needs to be done after all the error checking code\notherwise on error we would have opened a new connection without\nactually using it. 
Just move below code outside of the for loop in\npostgresExecForeignTruncate\n+ user = GetUserMapping(GetUserId(), server_id);\n+ conn = GetConnection(user, false,NULL);\nto here:\n+ Assert (OidIsValid(server_id)));\n+\n+ /* get a connection to the server */\n+ user = GetUserMapping(GetUserId(), server_id);\n+ conn = GetConnection(user, false, NULL);\n+\n+ /* set up remote query */\n+ initStringInfo(&sql);\n+ deparseTruncateSql(&sql, frels_list, frels_extra, behavior, restart_seqs);\n+ /* run remote query */\n+ if (!PQsendQuery(conn, sql.data))\n+ pgfdw_report_error(ERROR, NULL, conn, false, sql.data);\n+ res = pgfdw_get_result(conn, sql.data);\n+ if (PQresultStatus(res) != PGRES_COMMAND_OK)\n+ pgfdw_report_error(ERROR, res, conn, true, sql.data);\n\n5) This assertion is bogus, because GetForeignServerIdByRelId will\nreturn valid server id and otherwise it fails with \"cache lookup\nerror\", so please remove it.\n+ else\n+ {\n+ /* postgresExecForeignTruncate() is invoked for each server */\n+ Assert(server_id == GetForeignServerIdByRelId(frel_oid));\n+ }\n\n6) You can add a comment here saying this if-clause gets executed only\nonce per server.\n+\n+ if (!OidIsValid(server_id))\n+ {\n+ server_id = GetForeignServerIdByRelId(frel_oid);\n\n7) Did you try checking whether we reach hash_destroy code when a\nfailure happens on executing truncate on a remote table? 
Otherwise we\nmight want to do routine->ExecForeignTruncate inside PG_TRY block?\n+ /* truncate_check_rel() has checked that already */\n+ Assert(routine->ExecForeignTruncate != NULL);\n+\n+ routine->ExecForeignTruncate(ft_info->frels_list,\n+ ft_info->frels_extra,\n+ behavior,\n+ restart_seqs);\n+ }\n+\n+ hash_destroy(ft_htab);\n+ }\n\n8) This comment can be removed and have more descriptive comment\nbefore the for loop in postgresExecForeignTruncate similar to the\ncomment what we have in postgresIsForeignRelUpdatable for updatable.\n+ /* pick up remote connection, and sanity checks */\n\n9) It will be good if you can divide up your patch into 3 separate\npatches - 0001 code, 0002 tests, 0003 documentation\n\n10) Why do we need many new tables for tests? Can't we do this -\ninsert, show count(*) as non-zero, truncate, show count(*) as 0, again\ninsert, another truncate test? And we can also have a very minimal\nnumber of rows say 1 or 2 not 10? If possible, reduce the number of\nnew tables introduced. And do you have a specific reason to have a\ntext column in each of the tables? AFAICS, we don't care about the\ncolumn type, you could just have another int column and use\ngenerate_series while inserting. We can remove md5 function calls.\nYour tests will look clean.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 5 Apr 2021 11:29:30 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "Thank you for your comments.\nI've attached v13.\n\n> Here are some more comments on the v12 patch:\n> I still don't understand why we need sum(id), not count(*). 
Am I\n> missing something here?\n\nThe value of \"id\" is used for checking whether correct records are\ntruncated or not.\nFor instance, on \"truncate with ONLY clause\",\nAt first, There are 16 values in \"tru_ftable_parent\", for instance,\n[1,2,3,...,8,10,11,12,...,18].\nBy \"TRUNCATE ONLY tru_ftable_parent\", [1,2,3,...,8] will be truncated.\nThus, the \"sum(id)\" = 126.\nIf we use \"count(*)\" here, then the value will be 8.\nLet's consider the case that there are 8 values [1,2,3,...,8] by some\nkind of bug after running \"TRUNCATE ONLY tru_ftable_parent\".\nThen, we miss this bug by \"count(*)\" because the value is the same as\nthe correct pattern.\n\n> 1) Instead of switch case, a simple if else clause would reduce the code a bit:\n> if (behavior == DROP_RESTRICT)\n> appendStringInfoString(buf, \" RESTRICT\");\n> else if (behavior == DROP_CASCADE)\n> appendStringInfoString(buf, \" CASCADE\");\n\nI've modified it.\n\n\n> 2) Some coding style comments:\n> It's better to have a new line after variable declarations,\n> assignments, function calls, before if blocks, after if blocks for\n> better readability of the code.\n\nI've fixed it.\n\n> 3) I think we need truncatable behaviour that is consistent with updatable.\n\nIt's not correct. \"truncatable\" option works the same as \"updatable\".\nI've confirmed that the table can be truncated with the combination:\ntruncatable on the server setting is false & truncatable on the table\nsetting is true.\nI've also added to the test.\n\n> 4) GetConnection needs to be done after all the error checking code\n> otherwise on error we would have opened a new connection without\n> actually using it. 
Just move below code outside of the for loop in\n> postgresExecForeignTruncate\n\nSure, I've moved it.\n\n\n> 5) This assertion is bogus, because GetForeignServerIdByRelId will\n> return valid server id and otherwise it fails with \"cache lookup\n> error\", so please remove it.\n\nI've removed it.\n\n> 6) You can add a comment here saying this if-clause gets executed only\n> once per server.\n\nI've added it.\n\n\n> 7) Did you try checking whether we reach hash_destroy code when a\n> failure happens on executing truncate on a remote table? Otherwise we\n> might want to do routine->ExecForeignTruncate inside PG_TRY block?\n\nI've added PG_TRY block.\n\n\n> 8) This comment can be removed and have more descriptive comment\n> before the for loop in postgresExecForeignTruncate similar to the\n> comment what we have in postgresIsForeignRelUpdatable for updatable.\n\nI've removed the comment and copied the comment from\npostgresIsForeignRelUpdatable,\nand modified it.\n\n> 9) It will be good if you can divide up your patch into 3 separate\n> patches - 0001 code, 0002 tests, 0003 documentation\n\nI'll do it when I send a patch in the future, please forgive me on this patch.\n\n> 10) Why do we need many new tables for tests? Can't we do this -\n> insert, show count(*) as non-zero, truncate, show count(*) as 0, again\n> insert, another truncate test? And we can also have a very minimal\n> number of rows say 1 or 2 not 10? If possible, reduce the number of\n> new tables introduced. And do you have a specific reason to have a\n> text column in each of the tables? AFAICS, we don't care about the\n> column type, you could just have another int column and use\n> generate_series while inserting. 
We can remove md5 function calls.\n> Your tests will look clean.\n\nI've removed the text field, but the number of records is kept.\nAs I said at the top, the value of id is checked so I want to keep the\nnumber of rows.", "msg_date": "Mon, 5 Apr 2021 23:08:08 +0900", "msg_from": "Kazutaka Onishi <onishi@heterodb.com>", "msg_from_op": true, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Mon, Apr 5, 2021 at 7:38 PM Kazutaka Onishi <onishi@heterodb.com> wrote:\n> > 3) I think we need truncatable behaviour that is consistent with updatable.\n>\n> It's not correct. 
\"truncatable\" option works the same as \"updatable\".\n> I've confirmed that the table can be truncated with the combination:\n> truncatable on the server setting is false & truncatable on the table\n> setting is true.\n> I've also added to the test.\n\nYeah you are right! I was wrong in understanding.\n\n> > 7) Did you try checking whether we reach hash_destroy code when a\n> > failure happens on executing truncate on a remote table? Otherwise we\n> > might want to do routine->ExecForeignTruncate inside PG_TRY block?\n>\n> I've added PG_TRY block.\n\nDid you check that hash_destroy is not reachable when an error occurs\non the remote server while executing truncate command? If yes, then\nonly we will have PG_TRY block, otherwise not.\n\n> > 9) It will be good if you can divide up your patch into 3 separate\n> > patches - 0001 code, 0002 tests, 0003 documentation\n>\n> I'll do it when I send a patch in the future, please forgive me on this patch.\n\nThat's okay.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 5 Apr 2021 20:05:32 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "> Did you check that hash_destroy is not reachable when an error occurs\n> on the remote server while executing truncate command?\n\nI've checked it and hash_destroy doesn't work on error.\n\nI just adding elog() to check this:\n+ elog(NOTICE,\"destroyed\");\n+ hash_destroy(ft_htab);\n\nThen I've checked by the test.\n\n+ -- 'truncatable' option\n+ ALTER SERVER loopback OPTIONS (ADD truncatable 'false');\n+ TRUNCATE tru_ftable; -- error\n+ ERROR: truncate on \"tru_ftable\" is prohibited\n<- hash_destroy doesn't work.\n+ ALTER FOREIGN TABLE tru_ftable OPTIONS (ADD truncatable 'true');\n+ TRUNCATE tru_ftable; -- accepted\n+ NOTICE: destroyed <- hash_destroy works.\n\nOf course, the elog() is not included in v13 
patch.", "msg_date": "Tue, 6 Apr 2021 00:17:22 +0900", "msg_from": "Kazutaka Onishi <onishi@heterodb.com>", "msg_from_op": true, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Mon, Apr 5, 2021 at 8:47 PM Kazutaka Onishi <onishi@heterodb.com> wrote:\n>\n> > Did you check that hash_destroy is not reachable when an error occurs\n> > on the remote server while executing truncate command?\n>\n> I've checked it and hash_destroy doesn't work on error.\n>\n> I just adding elog() to check this:\n> + elog(NOTICE,\"destroyed\");\n> + hash_destroy(ft_htab);\n>\n> Then I've checked by the 
test.\n>\n> + -- 'truncatable' option\n> + ALTER SERVER loopback OPTIONS (ADD truncatable 'false');\n> + TRUNCATE tru_ftable; -- error\n> + ERROR: truncate on \"tru_ftable\" is prohibited\n> <- hash_destroy doesn't work.\n> + ALTER FOREIGN TABLE tru_ftable OPTIONS (ADD truncatable 'true');\n> + TRUNCATE tru_ftable; -- accepted\n> + NOTICE: destroyed <- hash_destroy works.\n>\n> Of course, the elog() is not included in v13 patch.\n\nFew more comments on v13:\n\n1) Are we using all of these macros? I see that we are setting them\nbut we only use TRUNCATE_REL_CONTEXT_ONLY. If not used, can we remove\nthem?\n+#define TRUNCATE_REL_CONTEXT_NORMAL 0x01\n+#define TRUNCATE_REL_CONTEXT_ONLY 0x02\n+#define TRUNCATE_REL_CONTEXT_CASCADING 0x04\n\n2) Why is this change for? The existing comment properly says the\nbehaviour i.e. all foreign tables are updatable by default.\n@@ -2216,7 +2223,7 @@ postgresIsForeignRelUpdatable(Relation rel)\n ListCell *lc;\n\n /*\n- * By default, all postgres_fdw foreign tables are assumed updatable. This\n+ * By default, all postgres_fdw foreign tables are assumed NOT\ntruncatable. This\n\nAnd the below comment is wrong, by default foreign tables are assumed\ntruncatable.\n+ * By default, all postgres_fdw foreign tables are NOT assumed\ntruncatable. This\n+ * can be overridden by a per-server setting, which in turn can be\n+ * overridden by a per-table setting.\n+ */\n\n3) In the docs, let's not combine updatable and truncatable together.\nHave a separate section for <title>Truncatability Options</title>, all\nthe documentation related to it be under this new section.\n <para>\n By default all foreign tables using\n<filename>postgres_fdw</filename> are assumed\n- to be updatable. This may be overridden using the following option:\n+ to be updatable and truncatable. 
This may be overridden using\nthe following options:\n </para>\n\n4) I have a basic question: If I have a truncate statement with a mix\nof local and foreign tables, IIUC, the patch is dividing up a single\ntruncate statement into two truncate local tables, truncate foreign\ntables. Is this transaction safe at all?\nA better illustration: TRUNCATE local_rel1, local_rel2, local_rel3,\nforeign_rel1, foreign_rel2, foreign_rel3;\nYour patch executes TRUNCATE local_rel1, local_rel2, local_rel3; on\nlocal server and TRUNCATE foreign_rel1, foreign_rel2, foreign_rel3; on\nremote server. Am I right?\nNow the question is: if any failure occurs either in local server\nexecution or in remote server execution, the other truncate command\nwould succeed right? Isn't this non-transactional and we are breaking\nthe transactional guarantee of the truncation.\nLooks like the order of execution is - first local rel truncation and\nthen foreign rel truncation, so what happens if foreign rel truncation\nfails? 
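A hypothetical session that would probe exactly this point (the table names are invented for illustration, assuming local_rel1 is a plain table and foreign_rel1 a postgres_fdw foreign table):

```sql
BEGIN;
-- local_rel1 is truncated first, then the remote TRUNCATE is sent for
-- foreign_rel1; suppose the remote side raises an error at that point.
TRUNCATE local_rel1, foreign_rel1;
-- If that error aborts the whole local transaction, the truncation of
-- local_rel1 is rolled back together with it:
ROLLBACK;
SELECT count(*) FROM local_rel1;  -- should still show the original rows
```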
Can we revert the local rel truncation?\n\n6) Again v13 has white space errors, please ensure to run git diff\n--check on every patch.\nbharath@ubuntu:~/workspace/postgres$ git apply\n/mnt/hgfs/Shared/pgsql14-truncate-on-foreign-table.v13.patch\n/mnt/hgfs/Shared/pgsql14-truncate-on-foreign-table.v13.patch:41:\ntrailing whitespace.\n/mnt/hgfs/Shared/pgsql14-truncate-on-foreign-table.v13.patch:47:\ntrailing whitespace.\n\nwarning: 2 lines add whitespace errors.\nbharath@ubuntu:~/workspace/postgres$ git diff --check\ncontrib/postgres_fdw/deparse.c:2200: trailing whitespace.\n+\ncontrib/postgres_fdw/deparse.c:2206: trailing whitespace.\n+\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 6 Apr 2021 10:03:28 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "Thank you for checking v13, and here is v14 patch.\n\n> 1) Are we using all of these macros? I see that we are setting them\n> but we only use TRUNCATE_REL_CONTEXT_ONLY. If not used, can we remove\n> them?\n\nThese may be needed for the foreign data handler other than postgres_fdw.\n\n> 2) Why is this change for? The existing comment properly says the\n> behaviour i.e. all foreign tables are updatable by default.\n\nThis is just a mistake. I've fixed it.\n\n> 3) In the docs, let's not combine updatable and truncatable together.\n> Have a separate section for <title>Truncatability Options</title>, all\n> the documentation related to it be under this new section.\n\nSure. I've added new section.\n\n> 4) I have a basic question: If I have a truncate statement with a mix\n> of local and foreign tables, IIUC, the patch is dividing up a single\n> truncate statement into two truncate local tables, truncate foreign\n> tables. 
Is this transaction safe at all?\n\nAccording to this discussion, the truncation can be reverted on both the local\nand the remote server.\nhttps://www.postgresql.org/message-id/CAOP8fzbuJ5GdKa%2B%3DGtizbqFtO2xsQbn4mVjjzunmsNVJMChSMQ%40mail.gmail.com\n\n> 6) Again v13 has white space errors, please ensure to run git diff\n> --check on every patch.\n\nUmm... I'm sure I checked it on v13.\nI've confirmed it on v14.\n\nOn Tue, Apr 6, 2021 at 13:33, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Apr 5, 2021 at 8:47 PM Kazutaka Onishi <onishi@heterodb.com> wrote:\n> >\n> > > Did you check that hash_destroy is not reachable when an error occurs\n> > > on the remote server while executing truncate command?\n> >\n> > I've checked it and hash_destroy doesn't work on error.\n> >\n> > I just added elog() to check this:\n> > + elog(NOTICE,\"destroyed\");\n> > + hash_destroy(ft_htab);\n> >\n> > Then I've checked by the test.\n> >\n> > + -- 'truncatable' option\n> > + ALTER SERVER loopback OPTIONS (ADD truncatable 'false');\n> > + TRUNCATE tru_ftable; -- error\n> > + ERROR: truncate on \"tru_ftable\" is prohibited\n> > <- hash_destroy doesn't work.\n> > + ALTER FOREIGN TABLE tru_ftable OPTIONS (ADD truncatable 'true');\n> > + TRUNCATE tru_ftable; -- accepted\n> > + NOTICE: destroyed <- hash_destroy works.\n> >\n> > Of course, the elog() is not included in v13 patch.\n>\n> Few more comments on v13:\n>\n> 1) Are we using all of these macros? I see that we are setting them\n> but we only use TRUNCATE_REL_CONTEXT_ONLY. If not used, can we remove\n> them?\n> +#define TRUNCATE_REL_CONTEXT_NORMAL 0x01\n> +#define TRUNCATE_REL_CONTEXT_ONLY 0x02\n> +#define TRUNCATE_REL_CONTEXT_CASCADING 0x04\n>\n> 2) What is this change for? The existing comment properly says the\n> behaviour i.e. 
all foreign tables are updatable by default.\n> @@ -2216,7 +2223,7 @@ postgresIsForeignRelUpdatable(Relation rel)\n> ListCell *lc;\n>\n> /*\n> - * By default, all postgres_fdw foreign tables are assumed updatable. This\n> + * By default, all postgres_fdw foreign tables are assumed NOT\n> truncatable. This\n>\n> And the below comment is wrong, by default foreign tables are assumed\n> truncatable.\n> + * By default, all postgres_fdw foreign tables are NOT assumed\n> truncatable. This\n> + * can be overridden by a per-server setting, which in turn can be\n> + * overridden by a per-table setting.\n> + */\n>\n> 3) In the docs, let's not combine updatable and truncatable together.\n> Have a separate section for <title>Truncatability Options</title>, all\n> the documentation related to it be under this new section.\n> <para>\n> By default all foreign tables using\n> <filename>postgres_fdw</filename> are assumed\n> - to be updatable. This may be overridden using the following option:\n> + to be updatable and truncatable. This may be overridden using\n> the following options:\n> </para>\n>\n> 4) I have a basic question: If I have a truncate statement with a mix\n> of local and foreign tables, IIUC, the patch is dividing up a single\n> truncate statement into two truncate local tables, truncate foreign\n> tables. Is this transaction safe at all?\n> A better illustration: TRUNCATE local_rel1, local_rel2, local_rel3,\n> foreign_rel1, foreign_rel2, foreign_rel3;\n> Your patch executes TRUNCATE local_rel1, local_rel2, local_rel3; on\n> local server and TRUNCATE foreign_rel1, foreign_rel2, foreign_rel3; on\n> remote server. Am I right?\n> Now the question is: if any failure occurs either in local server\n> execution or in remote server execution, the other truncate command\n> would succeed right? 
Isn't this non-transactional and we are breaking\n> the transactional guarantee of the truncation.\n> Looks like the order of execution is - first local rel truncation and\n> then foreign rel truncation, so what happens if foreign rel truncation\n> fails? Can we revert the local rel truncation?\n>\n> 6) Again v13 has white space errors, please ensure to run git diff\n> --check on every patch.\n> bharath@ubuntu:~/workspace/postgres$ git apply\n> /mnt/hgfs/Shared/pgsql14-truncate-on-foreign-table.v13.patch\n> /mnt/hgfs/Shared/pgsql14-truncate-on-foreign-table.v13.patch:41:\n> trailing whitespace.\n> /mnt/hgfs/Shared/pgsql14-truncate-on-foreign-table.v13.patch:47:\n> trailing whitespace.\n>\n> warning: 2 lines add whitespace errors.\n> bharath@ubuntu:~/workspace/postgres$ git diff --check\n> contrib/postgres_fdw/deparse.c:2200: trailing whitespace.\n> +\n> contrib/postgres_fdw/deparse.c:2206: trailing whitespace.\n> +\n>\n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 6 Apr 2021 21:06:33 +0900", "msg_from": "Kazutaka Onishi <onishi@heterodb.com>", "msg_from_op": true, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Tue, Apr 6, 2021 at 5:36 PM Kazutaka Onishi <onishi@heterodb.com> wrote:\n>\n> Thank you for checking v13, and here is v14 patch.\n>\n> > 1) Are we using all of these macros? I see that we are setting them\n> > but we only use TRUNCATE_REL_CONTEXT_ONLY. If not used, can we remove\n> > them?\n>\n> These may be needed for the foreign data handler other than postgres_fdw.\n\nI'm not sure about this, but if it's discussed upthread and agreed\nupon, I'm fine with it.\n\n> > 4) I have a basic question: If I have a truncate statement with a mix\n> > of local and foreign tables, IIUC, the patch is dividing up a single\n> > truncate statement into two truncate local tables, truncate foreign\n> > tables. 
Is this transaction safe at all?\n>\n> According to this discussion, we can revert both tables in the local\n> and the server.\n> https://www.postgresql.org/message-id/CAOP8fzbuJ5GdKa%2B%3DGtizbqFtO2xsQbn4mVjjzunmsNVJMChSMQ%40mail.gmail.com\n\nOn giving more thought on this, it looks like we are safe i.e. local\ntruncation will get reverted. Because if an error occurs on foreign\ntable truncation, the control in the local server would go to\npgfdw_report_error which generates an error in the local server which\naborts the local transaction and so the local table truncations would\nget reverted.\n\n+ /* run remote query */\n+ if (!PQsendQuery(conn, sql.data))\n+ pgfdw_report_error(ERROR, NULL, conn, false, sql.data);\n+\n+ res = pgfdw_get_result(conn, sql.data);\n+\n+ if (PQresultStatus(res) != PGRES_COMMAND_OK)\n+ pgfdw_report_error(ERROR, res, conn, true, sql.data);\n\nI still feel that the above bunch of code is duplicate of what\ndo_sql_command function already has. I would recommend that we just\nmake that function non-static(it's easy to do) and keep the\ndeclaration in postgres_fdw.h and use it in the\npostgresExecForeignTruncate.\n\nAnother minor comment:\nWe could move + ForeignServer *serv = NULL; within foreach (lc,\nfrels_list), because it's not being used outside.\n\nThe v14 patch mostly looks good to me other than the above comments.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 6 Apr 2021 19:47:52 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Tue, Apr 6, 2021 at 5:36 PM Kazutaka Onishi <onishi@heterodb.com> wrote:\n>\n> Thank you for checking v13, and here is v14 patch.\n\ncfbot failure on v14 - https://cirrus-ci.com/task/4772360931770368.\nLooks like it is not related to this patch, please re-confirm it.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: 
http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 6 Apr 2021 19:55:27 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "I've attached v15.\n\n> I still feel that the above bunch of code is duplicate of what\n> do_sql_command function already has. I would recommend that we just\n> make that function non-static(it's easy to do) and keep the\n> declaration in postgres_fdw.h and use it in the\n> postgresExecForeignTruncate.\n\nI've tried this on v15.\n\n> Another minor comment:\n> We could move + ForeignServer *serv = NULL; within foreach (lc,\n> frels_list), because it's not being used outside.\n\nI've moved it.\n\n> cfbot failure on v14 - https://cirrus-ci.com/task/4772360931770368.\n> Looks like it is not related to this patch, please re-confirm it.\n\nI've checked the v15 patch with \"make check-world\" and confirmed this passed.\n\n\n\nOn Tue, Apr 6, 2021 at 23:25, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Apr 6, 2021 at 5:36 PM Kazutaka Onishi <onishi@heterodb.com> wrote:\n> >\n> > Thank you for checking v13, and here is v14 patch.\n>\n> cfbot failure on v14 - https://cirrus-ci.com/task/4772360931770368.\n> Looks like it is not related to this patch, please re-confirm it.\n>\n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 7 Apr 2021 01:44:53 +0900", "msg_from": "Kazutaka Onishi <onishi@heterodb.com>", "msg_from_op": true, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Tue, Apr 6, 2021 at 10:15 PM Kazutaka Onishi <onishi@heterodb.com> wrote:\n> I've checked the v15 patch with \"make check-world\" and confirmed this passed.\n\nThanks for the patch. One minor thing - I think \"mixtured\" is not the\ncorrect word in \"+-- partition table mixtured by table and foreign\ntable\". 
How about something like \"+-- partitioned table with both\nlocal and foreign table as partitions\"?\n\nThe v15 patch basically looks good to me. I have no more comments.\n\nCF entry https://commitfest.postgresql.org/32/2972/ still says it's\n\"waiting on author\", do you want to change it to \"needs review\" if you\nhave no open points left so that others can take a look at it?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 7 Apr 2021 06:45:39 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "> One minor thing - I think \"mixtured\" is not the\n> correct word in \"+-- partition table mixtured by table and foreign\n> table\". How about something like \"+-- partitioned table with both\n> local and foreign table as partitions\"?\n\nSure. I've fixed this.\n\n> The v15 patch basically looks good to me. I have no more comments.\nThank you for checking this many times.\n\n> CF entry https://commitfest.postgresql.org/32/2972/ still says it's\n> \"waiting on author\", do you want to change it to \"needs review\" if you\n> have no open points left so that others can take a look at it?\nYes, please.\n\n2021年4月7日(水) 10:15 Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>:\n>\n> On Tue, Apr 6, 2021 at 10:15 PM Kazutaka Onishi <onishi@heterodb.com> wrote:\n> > I've checked v15 patch with \"make check-world\" and confirmed this passed.\n>\n> Thanks for the patch. One minor thing - I think \"mixtured\" is not the\n> correct word in \"+-- partition table mixtured by table and foreign\n> table\". How about something like \"+-- partitioned table with both\n> local and foreign table as partitions\"?\n>\n> The v15 patch basically looks good to me. 
I have no more comments.\nThank you for checking this so many times.\n\n> CF entry https://commitfest.postgresql.org/32/2972/ still says it's\n> \"waiting on author\", do you want to change it to \"needs review\" if you\n> have no open points left so that others can take a look at it?\nYes, please.\n\nOn Wed, Apr 7, 2021 at 10:15, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Apr 6, 2021 at 10:15 PM Kazutaka Onishi <onishi@heterodb.com> wrote:\n> > I've checked v15 patch with \"make check-world\" and confirmed this passed.\n>\n> Thanks for the patch. One minor thing - I think \"mixtured\" is not the\n> correct word in \"+-- partition table mixtured by table and foreign\n> table\". How about something like \"+-- partitioned table with both\n> local and foreign table as partitions\"?\n>\n> The v15 patch basically looks good to me. 
So we can define them as enum?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 8 Apr 2021 04:19:37 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "2021年4月8日(木) 4:19 Fujii Masao <masao.fujii@oss.nttdata.com>:\n>\n> On 2021/04/06 21:06, Kazutaka Onishi wrote:\n> > Thank you for checking v13, and here is v14 patch.\n> >\n> >> 1) Are we using all of these macros? I see that we are setting them\n> >> but we only use TRUNCATE_REL_CONTEXT_ONLY. If not used, can we remove\n> >> them?\n> >\n> > These may be needed for the foreign data handler other than postgres_fdw.\n>\n> Could you tell me how such FDWs use TRUNCATE_REL_CONTEXT_CASCADING and _NORMAL? I'm still not sure if TRUNCATE_REL_CONTEXT_CASCADING is really required.\n>\nhttps://www.postgresql.org/message-id/20200102144644.GM3195%40tamriel.snowman.net\n\nThis is the suggestion when I added the flag to inform cascading.\n\n| .... Instead, I'd suggest we have the core code build\n| up a list of tables to truncate, for each server, based just on the list\n| passed in by the user, and then also pass in if CASCADE was included or\n| not, and then let the FDW handle that in whatever way makes sense for\n| the foreign server (which, for a PG system, would probably be just\n| building up the TRUNCATE command and running it with or without the\n| CASCADE option, but it might be different on other systems).\n|\nIndeed, it is not a strong technical reason at this moment.\n(And, I also don't have idea to distinct these differences in my module also.)\n\n> With the patch, both inherited and referencing relations are marked as TRUNCATE_REL_CONTEXT_CASCADING? Is this ok for that use? 
Or we should distinguish them?\n>\nIn addition, even though my prior implementation distinguished and deliver\nthe status whether the truncate command is issued with NORMAL or ONLY,\ndoes the remote query by postgres_fdw needs to follow the manner?\n\nPlease assume the case when a foreign-table \"ft\" that maps a remote table\nwith some child-relations.\nIf we run TRUNCATE ONLY ft at the local server, postgres_fdw setup\na remote truncate command with \"ONLY\" qualifier, then remote postgresql\nserver truncate only parent table of the remote side.\nNext, \"SELECT * FROM ft\" command returns some valid rows from the\nchild tables in the remote side, even if it is just after TRUNCATE command.\nIs it a intuitive behavior for users?\n\nEven though we have discussed about the flags and expected behavior of\nforeign truncate, strip of the relids_extra may be the most straight-forward\nAPI design.\nSo, in other words, the API requires FDW driver to make the entire data\nrepresented by the foreign table empty, by ExecForeignTruncate().\nIt is probably more consistent to look at DropBehavior for listing-up the\ntarget relations at the local relations only.\n\nHow about your thought?\n\nIf we stand on the above design, ExecForeignTruncate() don't needs\nfrels_extra and behavior arguments.\n\n> +#define TRUNCATE_REL_CONTEXT_NORMAL 0x01\n> +#define TRUNCATE_REL_CONTEXT_ONLY 0x02\n> +#define TRUNCATE_REL_CONTEXT_CASCADING 0x04\n>\n> With the patch, these are defined as flag bits. But ExecuteTruncate() seems to always set the entry in relids_extra to either of them, not the combination of them. 
So we can define them as enum?\n>\nRegardless of my above comment, It's a bug.\nWhen list_member_oid(relids, myrelid) == true, we have to set proper flag on the\nrelevant frels_extra member, not just ignoring.\n\nBest regards,\n\n\nBest regards,\n--\nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n", "msg_date": "Thu, 8 Apr 2021 10:56:08 +0900", "msg_from": "Kohei KaiGai <kaigai@heterodb.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "\n\nOn 2021/04/08 10:56, Kohei KaiGai wrote:\n> 2021年4月8日(木) 4:19 Fujii Masao <masao.fujii@oss.nttdata.com>:\n>>\n>> On 2021/04/06 21:06, Kazutaka Onishi wrote:\n>>> Thank you for checking v13, and here is v14 patch.\n>>>\n>>>> 1) Are we using all of these macros? I see that we are setting them\n>>>> but we only use TRUNCATE_REL_CONTEXT_ONLY. If not used, can we remove\n>>>> them?\n>>>\n>>> These may be needed for the foreign data handler other than postgres_fdw.\n>>\n>> Could you tell me how such FDWs use TRUNCATE_REL_CONTEXT_CASCADING and _NORMAL? I'm still not sure if TRUNCATE_REL_CONTEXT_CASCADING is really required.\n>>\n> https://www.postgresql.org/message-id/20200102144644.GM3195%40tamriel.snowman.net\n> \n> This is the suggestion when I added the flag to inform cascading.\n> \n> | .... 
Instead, I'd suggest we have the core code build\n> | up a list of tables to truncate, for each server, based just on the list\n> | passed in by the user, and then also pass in if CASCADE was included or\n> | not, and then let the FDW handle that in whatever way makes sense for\n> | the foreign server (which, for a PG system, would probably be just\n> | building up the TRUNCATE command and running it with or without the\n> | CASCADE option, but it might be different on other systems).\n> |\n> Indeed, it is not a strong technical reason at this moment.\n> (And, I also don't have idea to distinct these differences in my module also.)\n\nCASCADE option mentioned in the above seems the CASCADE clause specified in TRUNCATE command. No? So the above doesn't seem to suggest to include the information about how each table to truncate is picked up. Am I missing something?\n\n\n> \n>> With the patch, both inherited and referencing relations are marked as TRUNCATE_REL_CONTEXT_CASCADING? Is this ok for that use? Or we should distinguish them?\n>>\n> In addition, even though my prior implementation distinguished and deliver\n> the status whether the truncate command is issued with NORMAL or ONLY,\n> does the remote query by postgres_fdw needs to follow the manner?\n> \n> Please assume the case when a foreign-table \"ft\" that maps a remote table\n> with some child-relations.\n> If we run TRUNCATE ONLY ft at the local server, postgres_fdw setup\n> a remote truncate command with \"ONLY\" qualifier, then remote postgresql\n> server truncate only parent table of the remote side.\n> Next, \"SELECT * FROM ft\" command returns some valid rows from the\n> child tables in the remote side, even if it is just after TRUNCATE command.\n> Is it a intuitive behavior for users?\n\nYes, because that's the same behavior as for the local tables. No?\n\nIf this understanding is true, the following note that the patch added is also intuitive, and not necessary? 
At least \"partition leafs\" part should be removed because TRUNCATE ONLY fails if the remote table is a partitioned table.\n\n+ Pay attention for the case when a foreign table maps remote table\n+ that has inherited children or partition leafs.\n+ <command>TRUNCATE</command> specifies the foreign tables with\n+ <literal>ONLY</literal> clause, remove queries over the\n+ <filename>postgres_fdw</filename> also specify remote tables with\n+ <literal>ONLY</literal> clause, that will truncate only parent\n+ portion of the remote table. In the results, it looks like\n+ <command>TRUNCATE</command> command partially eliminated contents\n+ of the foreign tables.\n\n\n> \n> Even though we have discussed about the flags and expected behavior of\n> foreign truncate, strip of the relids_extra may be the most straight-forward\n> API design.\n> So, in other words, the API requires FDW driver to make the entire data\n> represented by the foreign table empty, by ExecForeignTruncate().\n> It is probably more consistent to look at DropBehavior for listing-up the\n> target relations at the local relations only.\n> \n> How about your thought?\n\nI was thinking to remove only TRUNCATE_REL_CONTEXT_CASCADING if that's really not necessary. That is, rels_extra is still used to indicate whether each table is specified with ONLY option or not. To do this, we can use _NORMAL and _ONLY. Or we can also make that as the list of boolean flag (indicating whether ONLY is specified or not).\n\n\n> \n> If we stand on the above design, ExecForeignTruncate() don't needs\n> frels_extra and behavior arguments.\n> \n>> +#define TRUNCATE_REL_CONTEXT_NORMAL 0x01\n>> +#define TRUNCATE_REL_CONTEXT_ONLY 0x02\n>> +#define TRUNCATE_REL_CONTEXT_CASCADING 0x04\n>>\n>> With the patch, these are defined as flag bits. But ExecuteTruncate() seems to always set the entry in relids_extra to either of them, not the combination of them. 
So we can define them as enum?\n>>\n> Regardless of my above comment, It's a bug.\n> When list_member_oid(relids, myrelid) == true, we have to set proper flag on the\n> relevant frels_extra member, not just ignoring.\n\nOne concern about this is that local tables are not processed that way. For local tables, the information (whether ONLY is specified or not) of the table found first is used. For example, when we execute \"TRUNCATE ONLY tbl, tbl\" and \"TRUNCATE tbl, ONLY tbl\", the former truncates only the parent table because \"ONLY tbl\" is found first. But the latter truncates the parent and all inherited tables because \"tbl\" is found first.\n\nIf even the foreign table follows this manner, the current patch's logic seems right.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 8 Apr 2021 11:44:15 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Thu, Apr 8, 2021 at 11:44, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2021/04/08 10:56, Kohei KaiGai wrote:\n> > On Thu, Apr 8, 2021 at 4:19, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >> On 2021/04/06 21:06, Kazutaka Onishi wrote:\n> >>> Thank you for checking v13, and here is v14 patch.\n> >>>\n> >>>> 1) Are we using all of these macros? I see that we are setting them\n> >>>> but we only use TRUNCATE_REL_CONTEXT_ONLY. If not used, can we remove\n> >>>> them?\n> >>>\n> >>> These may be needed for the foreign data handler other than postgres_fdw.\n> >>\n> >> Could you tell me how such FDWs use TRUNCATE_REL_CONTEXT_CASCADING and _NORMAL? I'm still not sure if TRUNCATE_REL_CONTEXT_CASCADING is really required.\n> >>\n> > https://www.postgresql.org/message-id/20200102144644.GM3195%40tamriel.snowman.net\n> >\n> > This is the suggestion from when I added the flag to signal cascading.\n> >\n> > | .... 
Instead, I'd suggest we have the core code build\n> > | up a list of tables to truncate, for each server, based just on the list\n> > | passed in by the user, and then also pass in if CASCADE was included or\n> > | not, and then let the FDW handle that in whatever way makes sense for\n> > | the foreign server (which, for a PG system, would probably be just\n> > | building up the TRUNCATE command and running it with or without the\n> > | CASCADE option, but it might be different on other systems).\n> > |\n> > Indeed, it is not a strong technical reason at this moment.\n> > (And, I also don't have idea to distinct these differences in my module also.)\n>\n> CASCADE option mentioned in the above seems the CASCADE clause specified in TRUNCATE command. No? So the above doesn't seem to suggest to include the information about how each table to truncate is picked up. Am I missing something?\n>\nIt might be a bit different context.\n\n> >\n> >> With the patch, both inherited and referencing relations are marked as TRUNCATE_REL_CONTEXT_CASCADING? Is this ok for that use? Or we should distinguish them?\n> >>\n> > In addition, even though my prior implementation distinguished and deliver\n> > the status whether the truncate command is issued with NORMAL or ONLY,\n> > does the remote query by postgres_fdw needs to follow the manner?\n> >\n> > Please assume the case when a foreign-table \"ft\" that maps a remote table\n> > with some child-relations.\n> > If we run TRUNCATE ONLY ft at the local server, postgres_fdw setup\n> > a remote truncate command with \"ONLY\" qualifier, then remote postgresql\n> > server truncate only parent table of the remote side.\n> > Next, \"SELECT * FROM ft\" command returns some valid rows from the\n> > child tables in the remote side, even if it is just after TRUNCATE command.\n> > Is it a intuitive behavior for users?\n>\n> Yes, because that's the same behavior as for the local tables. No?\n>\nNo. 
;-p\n\nWhen we define a foreign table as follows,\n\npostgres=# CREATE FOREIGN TABLE ft (id int, v text)\n SERVER loopback OPTIONS (table_name 't_parent',\ntruncatable 'true');\npostgres=# select * from ft;\n id | v\n----+-------------------\n 1 | 1 in the parent\n 2 | 2 in the parent\n 3 | 3 in the parent\n 4 | 4 in the parent\n 11 | 11 in the child_1\n 12 | 12 in the child_1\n 13 | 13 in the child_1\n 21 | 21 in the child_2\n 22 | 22 in the child_2\n 23 | 23 in the child_2\n(10 rows)\n\nTRUNCATE ONLY eliminates the rows that come from the parent table on the remote side,\neven though this foreign table has no parent-child relationship on the\nlocal side.\n\npostgres=# begin;\nBEGIN\npostgres=# truncate only ft;\nTRUNCATE TABLE\npostgres=# select * from ft;\n id | v\n----+-------------------\n 11 | 11 in the child_1\n 12 | 12 in the child_1\n 13 | 13 in the child_1\n 21 | 21 in the child_2\n 22 | 22 in the child_2\n 23 | 23 in the child_2\n(6 rows)\n\npostgres=# abort;\nROLLBACK\n\nIn the case where a local table (with no children) has the same contents,\nthe TRUNCATE command\nwill remove the entire table contents.\n\npostgres=# select * INTO tt FROM ft;\nSELECT 10\npostgres=# select * from tt;\n id | v\n----+-------------------\n 1 | 1 in the parent\n 2 | 2 in the parent\n 3 | 3 in the parent\n 4 | 4 in the parent\n 11 | 11 in the child_1\n 12 | 12 in the child_1\n 13 | 13 in the child_1\n 21 | 21 in the child_2\n 22 | 22 in the child_2\n 23 | 23 in the child_2\n(10 rows)\n\npostgres=# truncate only tt;\nTRUNCATE TABLE\npostgres=# select * from tt;\n id | v\n----+---\n(0 rows)\n\n> If this understanding is true, the following note that the patch added is also intuitive, and not necessary? 
At least \"partition leafs\" part should be removed because TRUNCATE ONLY fails if the remote table is a partitioned table.\n>\n> + Pay attention for the case when a foreign table maps remote table\n> + that has inherited children or partition leafs.\n> + <command>TRUNCATE</command> specifies the foreign tables with\n> + <literal>ONLY</literal> clause, remove queries over the\n> + <filename>postgres_fdw</filename> also specify remote tables with\n> + <literal>ONLY</literal> clause, that will truncate only parent\n> + portion of the remote table. In the results, it looks like\n> + <command>TRUNCATE</command> command partially eliminated contents\n> + of the foreign tables.\n>\nBase on the above assumption, I don't think it should be a part of\ndocumentation.\nOn the other hands, we need to describe this API requires FDW driver to wipe out\nthe entire data on behalf of the foreign tables once they are picked up by the\nExecuteTruncate().\n\n> > Even though we have discussed about the flags and expected behavior of\n> > foreign truncate, strip of the relids_extra may be the most straight-forward\n> > API design.\n> > So, in other words, the API requires FDW driver to make the entire data\n> > represented by the foreign table empty, by ExecForeignTruncate().\n> > It is probably more consistent to look at DropBehavior for listing-up the\n> > target relations at the local relations only.\n> >\n> > How about your thought?\n>\n> I was thinking to remove only TRUNCATE_REL_CONTEXT_CASCADING if that's really not necessary. That is, rels_extra is still used to indicate whether each table is specified with ONLY option or not. To do this, we can use _NORMAL and _ONLY. 
Or we can also make that as the list of boolean flag (indicating whether ONLY is specified or not).\n>\nI'm inclined to eliminate the relids_extra list itself, because FDW\ndrivers don't need to\ndistinguish the CASCADING, NORMAL or ONLY cases.\nExecForeignTruncate receives a list of foreign tables that has already been\nexpanded by ExecuteTruncate(); thus, all the FDW driver has to do is\nwipe out the entire data mapped to the individual foreign tables.\nAlso, the FDW driver doesn't need to know DropBehavior.\n\n> > If we stand on the above design, ExecForeignTruncate() doesn't need the\n> > frels_extra and behavior arguments.\n> >\n> >> +#define TRUNCATE_REL_CONTEXT_NORMAL 0x01\n> >> +#define TRUNCATE_REL_CONTEXT_ONLY 0x02\n> >> +#define TRUNCATE_REL_CONTEXT_CASCADING 0x04\n> >>\n> >> With the patch, these are defined as flag bits. But ExecuteTruncate() seems to always set the entry in relids_extra to either of them, not the combination of them. So we can define them as enum?\n> >\n> > Regardless of my above comment, It's a bug.\n> > When list_member_oid(relids, myrelid) == true, we have to set proper flag on the\n> > relevant frels_extra member, not just ignoring.\n>\n> One concern about this is that local tables are not processed that way. For local tables, the information (whether ONLY is specified or not) of the table found first is used. For example, when we execute \"TRUNCATE ONLY tbl, tbl\" and \"TRUNCATE tbl, ONLY tbl\", the former truncates only the parent table because \"ONLY tbl\" is found first. But the latter truncates the parent and all inherited tables because \"tbl\" is found first.\n>\n> If even the foreign table follows this manner, the current patch's logic seems right.\n>\n-1. 
:-(\nIt should be fixed, even if we try to deliver the relids_extra list.\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n", "msg_date": "Thu, 8 Apr 2021 13:43:25 +0900", "msg_from": "Kohei KaiGai <kaigai@heterodb.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "\n\nOn 2021/04/08 13:43, Kohei KaiGai wrote:\n> In case when a local table (with no children) has same contents,\n> TRUNCATE command\n> will remove the entire table contents.\n\nBut if there are local child tables that inherit the local parent table, and TRUNCATE ONLY <parent table> is executed, only the contents in the parent will be truncated. I was thinking that this behavior should be applied to the foreign table whose remote (parent) table has remote child tables.\n\nSo what we need to reach the consensus is; how far ONLY option affects. Please imagine the case where we have\n\n(1) local parent table, also foreign table of remote parent table\n(2) local child table, inherits local parent table\n(3) remote parent table\n(4) remote child table, inherits remote parent table\n\nI think that we agree all (1), (2), (3) and (4) should be truncated if local parent table (1) is specified without ONLY in TRUNCATE command. OTOH, if ONLY is specified, we agree that at least local child table (2) should NOT be truncated.\n\nSo the remaining point is; remote tables (3) and (4) should be truncated or not when ONLY is specified? You seem to argue that both should be truncated by removing extra list. I was thinking that only remote parent table (3) should be truncated. That is, IMO we should treat the truncation on foreign table as the same as that on its foreign data source.\n\nOther people might think neither (3) nor (4) should be truncated in that case because ONLY should affect only the table directly specified in TRUNCATE command, i.e., local parent table (1). 
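The four-table setup described in this message can be written out as DDL. The following is an illustrative sketch only — the server name, user mapping and connection options are hypothetical, and it assumes postgres_fdw is available on the local side:

```sql
-- On the remote server: a parent table with one inheriting child
CREATE TABLE remote_parent (id int);
CREATE TABLE remote_child () INHERITS (remote_parent);

-- On the local server: a foreign table over remote_parent,
-- plus a plain local table inheriting the foreign table
CREATE EXTENSION postgres_fdw;
CREATE SERVER remote_srv FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'remote-host', dbname 'postgres');  -- hypothetical host
CREATE USER MAPPING FOR CURRENT_USER SERVER remote_srv;
CREATE FOREIGN TABLE local_parent (id int)
    SERVER remote_srv OPTIONS (table_name 'remote_parent');
CREATE TABLE local_child () INHERITS (local_parent);

-- Without ONLY: (1) and (2) are truncated locally, and the remote
-- side is asked to truncate as well
TRUNCATE local_parent;

-- With ONLY: (2) is skipped; how far (3) and (4) are affected is
-- exactly the open question in this discussion
TRUNCATE ONLY local_parent;
```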
For now this also looks good to me.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 8 Apr 2021 15:04:57 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "2021年4月8日(木) 15:04 Fujii Masao <masao.fujii@oss.nttdata.com>:\n>\n> On 2021/04/08 13:43, Kohei KaiGai wrote:\n> > In case when a local table (with no children) has same contents,\n> > TRUNCATE command\n> > witll remove the entire table contents.\n>\n> But if there are local child tables that inherit the local parent table, and TRUNCATE ONLY <parent table> is executed, only the contents in the parent will be truncated. I was thinking that this behavior should be applied to the foreign table whose remote (parent) table have remote child tables.\n>\n> So what we need to reach the consensus is; how far ONLY option affects. Please imagine the case where we have\n>\n> (1) local parent table, also foreign table of remote parent table\n> (2) local child table, inherits local parent table\n> (3) remote parent table\n> (4) remote child table, inherits remote parent table\n>\n> I think that we agree all (1), (2), (3) and (4) should be truncated if local parent table (1) is specified without ONLY in TRUNCATE command. 
OTOH, if ONLY is specified, we agree that at least local child table (2) should NOT be truncated.\n>\nMy understanding of a foreign table is a representation of external\ndata, including remote RDBMS but not only RDBMS,\nregardless of the parent-child relationship at the local side.\nSo, once a local foreign table wraps entire tables tree (a parent and\nrelevant children) at the remote side, at least, it shall\nbe considered as a unified data chunk from the standpoint of the local side.\n\nPlease assume if file_fdw could map 3 different CSV files, then\ntruncate on the foreign table may eliminate just 1 of 3 files.\nIs it an expected / preferable behavior?\nBasically, we don't assume any charasteristics of the data on behalf\nof the FDW driver, even if it is PostgreSQL server.\nThus, I think the new API will expect to eliminate the entire rows on\nbehalf of the foreign table, regardless of the ONLY-clause,\nbecause it already controls which foreign-tables shall be picked up,\nbut does not control which part of the foreign table\nshall be eliminated.\n\n> So the remaining point is; remote tables (3) and (4) should be truncated or not when ONLY is specified? You seem to argue that both should be truncated by removing extra list. I was thinking that only remote parent table (3) should be truncated. That is, IMO we should treat the truncation on foreign table as the same as that on its forein data source.\n>\n> Other people might think neither (3) nor (4) should be truncated in that case because ONLY should affect only the table directly specified in TRUNCATE command, i.e., local parent table (1). 
For now this also looks good to me.\n>\nIn case when the local foreign table is a parent, the entire remote\ntable shall be truncated, if ONLY is given.\nIn case when the local foreign table is a child, nothing shall be\nhappen (API is not called), if ONLY is given.\n\nIMO, it is stable and simple definition, even if FDW driver wraps\nnon-RDBMS data source that has no idea\nof table inheritance.\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n", "msg_date": "Thu, 8 Apr 2021 15:48:15 +0900", "msg_from": "Kohei KaiGai <kaigai@heterodb.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On 2021/04/08 15:48, Kohei KaiGai wrote:\n> 2021年4月8日(木) 15:04 Fujii Masao <masao.fujii@oss.nttdata.com>:\n>>\n>> On 2021/04/08 13:43, Kohei KaiGai wrote:\n>>> In case when a local table (with no children) has same contents,\n>>> TRUNCATE command\n>>> witll remove the entire table contents.\n>>\n>> But if there are local child tables that inherit the local parent table, and TRUNCATE ONLY <parent table> is executed, only the contents in the parent will be truncated. I was thinking that this behavior should be applied to the foreign table whose remote (parent) table have remote child tables.\n>>\n>> So what we need to reach the consensus is; how far ONLY option affects. Please imagine the case where we have\n>>\n>> (1) local parent table, also foreign table of remote parent table\n>> (2) local child table, inherits local parent table\n>> (3) remote parent table\n>> (4) remote child table, inherits remote parent table\n>>\n>> I think that we agree all (1), (2), (3) and (4) should be truncated if local parent table (1) is specified without ONLY in TRUNCATE command. 
OTOH, if ONLY is specified, we agree that at least local child table (2) should NOT be truncated.\n>>\n> My understanding of a foreign table is a representation of external\n> data, including remote RDBMS but not only RDBMS,\n> regardless of the parent-child relationship at the local side.\n> So, once a local foreign table wraps entire tables tree (a parent and\n> relevant children) at the remote side, at least, it shall\n> be considered as a unified data chunk from the standpoint of the local side.\n\nAt least for me it's not intuitive to truncate the remote table and all its dependent tables even though users explicitly specify ONLY for the foreign table. As far as I read the past discussion, some people were thinking the same.\n\n> \n> Please assume if file_fdw could map 3 different CSV files, then\n> truncate on the foreign table may eliminate just 1 of 3 files.\n> Is it an expected / preferable behavior?\n\nI think that's up to each FDW. That is, IMO the information about whether ONLY is specified or not for each table should be passed to FDW. Then FDW itself should determine how to handle that information.\n\nAnyway, attached is the updated version of the patch. This is still based on the latest Kazutaka-san's patch. That is, extra list for ONLY is still passed to FDW. What about committing this version at first? Then we can continue the discussion and change the behavior later if necessary.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 8 Apr 2021 18:25:29 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "\n\nOn 2021/04/08 18:25, Fujii Masao wrote:\n> Anyway, attached is the updated version of the patch. This is still based on the latest Kazutaka-san's patch. That is, extra list for ONLY is still passed to FDW. What about committing this version at first? 
Then we can continue the discussion and change the behavior later if necessary.\n\nThe patch failed to be applied because of recent commit.\nAttached is the updated version of the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 8 Apr 2021 18:48:13 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "2021年4月8日(木) 18:25 Fujii Masao <masao.fujii@oss.nttdata.com>:\n>\n> On 2021/04/08 15:48, Kohei KaiGai wrote:\n> > 2021年4月8日(木) 15:04 Fujii Masao <masao.fujii@oss.nttdata.com>:\n> >>\n> >> On 2021/04/08 13:43, Kohei KaiGai wrote:\n> >>> In case when a local table (with no children) has same contents,\n> >>> TRUNCATE command\n> >>> witll remove the entire table contents.\n> >>\n> >> But if there are local child tables that inherit the local parent table, and TRUNCATE ONLY <parent table> is executed, only the contents in the parent will be truncated. I was thinking that this behavior should be applied to the foreign table whose remote (parent) table have remote child tables.\n> >>\n> >> So what we need to reach the consensus is; how far ONLY option affects. Please imagine the case where we have\n> >>\n> >> (1) local parent table, also foreign table of remote parent table\n> >> (2) local child table, inherits local parent table\n> >> (3) remote parent table\n> >> (4) remote child table, inherits remote parent table\n> >>\n> >> I think that we agree all (1), (2), (3) and (4) should be truncated if local parent table (1) is specified without ONLY in TRUNCATE command. 
OTOH, if ONLY is specified, we agree that at least local child table (2) should NOT be truncated.\n> >>\n> > My understanding of a foreign table is a representation of external\n> > data, including remote RDBMS but not only RDBMS,\n> > regardless of the parent-child relationship at the local side.\n> > So, once a local foreign table wraps entire tables tree (a parent and\n> > relevant children) at the remote side, at least, it shall\n> > be considered as a unified data chunk from the standpoint of the local side.\n>\n> At least for me it's not intuitive to truncate the remote table and its all dependent tables even though users explicitly specify ONLY for the foreign table. As far as I read the past discussion, some people was thinking the same.\n>\n> >\n> > Please assume if file_fdw could map 3 different CSV files, then\n> > truncate on the foreign table may eliminate just 1 of 3 files.\n> > Is it an expected / preferable behavior?\n>\n> I think that's up to each FDW. That is, IMO the information about whether ONLY is specified or not for each table should be passed to FDW. Then FDW itself should determine how to handle that information.\n>\n> Anyway, attached is the updated version of the patch. This is still based on the latest Kazutaka-san's patch. That is, extra list for ONLY is still passed to FDW. What about committing this version at first? Then we can continue the discussion and change the behavior later if necessary.\n>\nOk, it's fair enought for me.\n\nI'll try to sort out my thought, then raise a follow-up discussion if necessary.\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n", "msg_date": "Thu, 8 Apr 2021 22:02:08 +0900", "msg_from": "Kohei KaiGai <kaigai@heterodb.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On 2021/04/08 22:02, Kohei KaiGai wrote:\n>> Anyway, attached is the updated version of the patch. 
This is still based on the latest Kazutaka-san's patch. That is, extra list for ONLY is still passed to FDW. What about committing this version at first? Then we can continue the discussion and change the behavior later if necessary.\n\nPushed! Thank all involved in this development!!\nFor record, I attached the final patch I committed.\n\n\n> Ok, it's fair enought for me.\n> \n> I'll try to sort out my thought, then raise a follow-up discussion if necessary.\n\nThanks!\n\nThe followings are the open items and discussion points that I'm thinking of.\n\n1. Currently the extra information (TRUNCATE_REL_CONTEXT_NORMAL, TRUNCATE_REL_CONTEXT_ONLY or TRUNCATE_REL_CONTEXT_CASCADING) about how a foreign table was specified as the target to truncate in TRUNCATE command is collected and passed to FDW. Does this really need to be passed to FDW? Seems Stephen, Michael and I think that's necessary. But Kaigai-san does not. I also think that TRUNCATE_REL_CONTEXT_CASCADING can be removed because there seems no use case for that maybe.\n\n2. Currently when the same foreign table is specified multiple times in the command, the extra information only for the foreign table found first is collected. For example, when \"TRUNCATE ft, ONLY ft\" is executed, TRUNCATE_REL_CONTEXT_NORMAL is collected and _ONLY is ignored because \"ft\" is found first. Is this OK? Or we should collect all, e.g., both _NORMAL and _ONLY should be collected in that example? I think that the current approach (i.e., collect the extra info about table found first if the same table is specified multiple times) is good because even local tables are also treated the same way. But Kaigai-san does not.\n\n3. Currently postgres_fdw specifies ONLY clause in TRUNCATE command that it constructs. That is, if the foreign table is specified with ONLY, postgres_fdw also issues the TRUNCATE command for the corresponding remote table with ONLY to the remote server. 
Then only root table is truncated in remote server side, and the tables inheriting that are not truncated. Is this behavior desirable? Seems Michael and I think this behavior is OK. But Kaigai-san does not.\n\n4. Tab-completion for TRUNCATE should be updated so that also foreign tables are displayed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 8 Apr 2021 22:14:17 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "Fujii-san,\n\n> >> Anyway, attached is the updated version of the patch. This is still based on the latest Kazutaka-san's patch. That is, extra list for ONLY is still passed to FDW. What about committing this version at first? Then we can continue the discussion and change the behavior later if necessary.\n> Pushed! Thank all involved in this development!!\n> For record, I attached the final patch I committed.\n\nThank you for revising the v16 patch to v18 and pushing it.\nCool!\n\n2021年4月8日(木) 22:14 Fujii Masao <masao.fujii@oss.nttdata.com>:\n>\n>\n>\n> On 2021/04/08 22:02, Kohei KaiGai wrote:\n> >> Anyway, attached is the updated version of the patch. This is still based on the latest Kazutaka-san's patch. That is, extra list for ONLY is still passed to FDW. What about committing this version at first? Then we can continue the discussion and change the behavior later if necessary.\n>\n> Pushed! Thank all involved in this development!!\n> For record, I attached the final patch I committed.\n>\n>\n> > Ok, it's fair enought for me.\n> >\n> > I'll try to sort out my thought, then raise a follow-up discussion if necessary.\n>\n> Thanks!\n>\n> The followings are the open items and discussion points that I'm thinking of.\n>\n> 1. 
Currently the extra information (TRUNCATE_REL_CONTEXT_NORMAL, TRUNCATE_REL_CONTEXT_ONLY or TRUNCATE_REL_CONTEXT_CASCADING) about how a foreign table was specified as the target to truncate in TRUNCATE command is collected and passed to FDW. Does this really need to be passed to FDW? Seems Stephen, Michael and I think that's necessary. But Kaigai-san does not. I also think that TRUNCATE_REL_CONTEXT_CASCADING can be removed because there seems no use case for that maybe.\n>\n> 2. Currently when the same foreign table is specified multiple times in the command, the extra information only for the foreign table found first is collected. For example, when \"TRUNCATE ft, ONLY ft\" is executed, TRUNCATE_REL_CONTEXT_NORMAL is collected and _ONLY is ignored because \"ft\" is found first. Is this OK? Or we should collect all, e.g., both _NORMAL and _ONLY should be collected in that example? I think that the current approach (i.e., collect the extra info about table found first if the same table is specified multiple times) is good because even local tables are also treated the same way. But Kaigai-san does not.\n>\n> 3. Currently postgres_fdw specifies ONLY clause in TRUNCATE command that it constructs. That is, if the foreign table is specified with ONLY, postgres_fdw also issues the TRUNCATE command for the corresponding remote table with ONLY to the remote server. Then only root table is truncated in remote server side, and the tables inheriting that are not truncated. Is this behavior desirable? Seems Michael and I think this behavior is OK. But Kaigai-san does not.\n>\n> 4. 
Tab-completion for TRUNCATE should be updated so that also foreign tables are displayed.\n>\n> Regards,\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 9 Apr 2021 00:25:58 +0900", "msg_from": "Kazutaka Onishi <onishi@heterodb.com>", "msg_from_op": true, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Thu, Apr 8, 2021 at 6:44 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> The followings are the open items and discussion points that I'm thinking of.\n>\n> 1. Currently the extra information (TRUNCATE_REL_CONTEXT_NORMAL, TRUNCATE_REL_CONTEXT_ONLY or TRUNCATE_REL_CONTEXT_CASCADING) about how a foreign table was specified as the target to truncate in TRUNCATE command is collected and passed to FDW. Does this really need to be passed to FDW? Seems Stephen, Michael and I think that's necessary. But Kaigai-san does not. I also think that TRUNCATE_REL_CONTEXT_CASCADING can be removed because there seems no use case for that maybe.\n\nI think we should remove the unused enums/macros, instead we could\nmention a note of the extensibility of those enums/macros in the\ncomments section around the enum/macro definitions.\n\n> 2. Currently when the same foreign table is specified multiple times in the command, the extra information only for the foreign table found first is collected. For example, when \"TRUNCATE ft, ONLY ft\" is executed, TRUNCATE_REL_CONTEXT_NORMAL is collected and _ONLY is ignored because \"ft\" is found first. Is this OK? Or we should collect all, e.g., both _NORMAL and _ONLY should be collected in that example? I think that the current approach (i.e., collect the extra info about table found first if the same table is specified multiple times) is good because even local tables are also treated the same way. 
But Kaigai-san does not.\n\nIMO, the foreign truncate command should be constructed by collecting\nall the information i.e. \"TRUNCATE ft, ONLY ft\" and let the remote\nserver execute how it wants to execute. That will be consistent and no\nextra logic is required to track the already seen foreign tables while\nforeign table collection/foreign truncate command is being prepared on\nthe local server.\n\nI was thinking that the postgres throws error or warning for commands\nsuch as truncate, vacuum, analyze when the same tables are specified,\nbut seems like that's not what it does.\n\n> 3. Currently postgres_fdw specifies ONLY clause in TRUNCATE command that it constructs. That is, if the foreign table is specified with ONLY, postgres_fdw also issues the TRUNCATE command for the corresponding remote table with ONLY to the remote server. Then only root table is truncated in remote server side, and the tables inheriting that are not truncated. Is this behavior desirable? Seems Michael and I think this behavior is OK. But Kaigai-san does not.\n\nI'm okay with the behaviour as it is consistent with what ONLY does to\nlocal tables. Documenting this behaviour (if not done already) is a\nbetter way I think.\n\n> 4. Tab-completion for TRUNCATE should be updated so that also foreign tables are displayed.\n\nIt will be good to have.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 9 Apr 2021 07:17:01 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Thu, Apr 8, 2021 at 6:47 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Thu, Apr 8, 2021 at 6:44 PM Fujii Masao <masao.fujii@oss.nttdata.com>\n> wrote:\n> > The followings are the open items and discussion points that I'm\n> thinking of.\n> >\n> > 1. 
Currently the extra information (TRUNCATE_REL_CONTEXT_NORMAL,\n> TRUNCATE_REL_CONTEXT_ONLY or TRUNCATE_REL_CONTEXT_CASCADING) about how a\n> foreign table was specified as the target to truncate in TRUNCATE command\n> is collected and passed to FDW. Does this really need to be passed to FDW?\n> Seems Stephen, Michael and I think that's necessary. But Kaigai-san does\n> not. I also think that TRUNCATE_REL_CONTEXT_CASCADING can be removed\n> because there seems no use case for that maybe.\n>\n> I think we should remove the unused enums/macros, instead we could\n> mention a note of the extensibility of those enums/macros in the\n> comments section around the enum/macro definitions.\n>\n> > 2. Currently when the same foreign table is specified multiple times in\n> the command, the extra information only for the foreign table found first\n> is collected. For example, when \"TRUNCATE ft, ONLY ft\" is executed,\n> TRUNCATE_REL_CONTEXT_NORMAL is collected and _ONLY is ignored because \"ft\"\n> is found first. Is this OK? Or we should collect all, e.g., both _NORMAL\n> and _ONLY should be collected in that example? I think that the current\n> approach (i.e., collect the extra info about table found first if the same\n> table is specified multiple times) is good because even local tables are\n> also treated the same way. But Kaigai-san does not.\n>\n> IMO, the foreign truncate command should be constructed by collecting\n> all the information i.e. \"TRUNCATE ft, ONLY ft\" and let the remote\n> server execute how it wants to execute. That will be consistent and no\n> extra logic is required to track the already seen foreign tables while\n> foreign table collection/foreign truncate command is being prepared on\n> the local server.\n>\n> I was thinking that the postgres throws error or warning for commands\n> such as truncate, vaccum, analyze when the same tables are specified,\n> but seems like that's not what it does.\n>\n> > 3. 
Currently postgres_fdw specifies ONLY clause in TRUNCATE command that\n> it constructs. That is, if the foreign table is specified with ONLY,\n> postgres_fdw also issues the TRUNCATE command for the corresponding remote\n> table with ONLY to the remote server. Then only root table is truncated in\n> remote server side, and the tables inheriting that are not truncated. Is\n> this behavior desirable? Seems Michael and I think this behavior is OK. But\n> Kaigai-san does not.\n>\n> I'm okay with the behaviour as it is consistent with what ONLY does to\n> local tables. Documenting this behaviour(if not done already) is a\n> better way I think.\n>\n> > 4. Tab-completion for TRUNCATE should be updated so that also foreign\n> tables are displayed.\n>\n> It will be good to have.\n>\n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n\n\nw.r.t. point #1:\nbq. I think we should remove the unused enums/macros,\n\nI agree. When there is more concrete use case which requires new enum, we\ncan add enum whose meaning would be clearer.\n\nCheers\n\n\n", "msg_date": "Thu, 8 Apr 2021 19:05:53 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "2021年4月8日(木) 22:14 Fujii Masao <masao.fujii@oss.nttdata.com>:\n>\n> On 2021/04/08 22:02, Kohei KaiGai wrote:\n> >> Anyway, attached is the updated version of the patch. This is still based on the latest Kazutaka-san's patch. That is, extra list for ONLY is still passed to FDW. What about committing this version at first? Then we can continue the discussion and change the behavior later if necessary.\n>\n> Pushed! Thank all involved in this development!!\n> For record, I attached the final patch I committed.\n>\n>\n> > Ok, it's fair enought for me.\n> >\n> > I'll try to sort out my thought, then raise a follow-up discussion if necessary.\n>\n> Thanks!\n>\n> The followings are the open items and discussion points that I'm thinking of.\n>\n> 1. Currently the extra information (TRUNCATE_REL_CONTEXT_NORMAL, TRUNCATE_REL_CONTEXT_ONLY or TRUNCATE_REL_CONTEXT_CASCADING) about how a foreign table was specified as the target to truncate in TRUNCATE command is collected and passed to FDW. Does this really need to be passed to FDW? Seems Stephen, Michael and I think that's necessary. But Kaigai-san does not. 
I also think that TRUNCATE_REL_CONTEXT_CASCADING can be removed because there seems no use case for that maybe.\n>\n> 2. Currently when the same foreign table is specified multiple times in the command, the extra information only for the foreign table found first is collected. For example, when \"TRUNCATE ft, ONLY ft\" is executed, TRUNCATE_REL_CONTEXT_NORMAL is collected and _ONLY is ignored because \"ft\" is found first. Is this OK? Or we should collect all, e.g., both _NORMAL and _ONLY should be collected in that example? I think that the current approach (i.e., collect the extra info about table found first if the same table is specified multiple times) is good because even local tables are also treated the same way. But Kaigai-san does not.\n>\n> 3. Currently postgres_fdw specifies ONLY clause in TRUNCATE command that it constructs. That is, if the foreign table is specified with ONLY, postgres_fdw also issues the TRUNCATE command for the corresponding remote table with ONLY to the remote server. Then only root table is truncated in remote server side, and the tables inheriting that are not truncated. Is this behavior desirable? Seems Michael and I think this behavior is OK. But Kaigai-san does not.\n>\nPrior to the discussion of 1-3, I like to clarify the role of foreign-tables.\n(Likely, it will lead a natural conclusion for the above open items.)\n\nAs literal of SQL/MED (Management of External Data), a foreign table\nis a representation of external data in PostgreSQL.\nIt allows to read and (optionally) write the external data wrapped by\nFDW drivers, as if we usually read / write heap tables.\nBy the FDW-APIs, the core PostgreSQL does not care about the\nstructure, location, volume and other characteristics of\nthe external data itself. 
It expects the FDW-API invocation to perform\nas if we were accessing a regular heap table.\n\nOn the other hand, we can say local tables are a representation of\n\"internal\" data in PostgreSQL.\nA heap table consists of one or more files (per BLCKSZ *\nRELSEG_SIZE), and the table-am intermediates\nthe on-disk data to/from the on-memory structure (TupleTableSlot).\nThere are no big differences in the concept. Ok?\n\nAs you know, the ONLY clause controls whether the TRUNCATE command shall run\non child tables also, not only the parent.\nIf \"ONLY parent_table\" is given, its child tables are not picked up by\nExecuteTruncate(), unless the child tables are\nlisted individually.\nThen, once ExecuteTruncate() has picked up the relations, it makes the\nrelations empty using the table-am\n(relation_set_new_filenode), and the callee\n(heapam_relation_set_new_filenode) does not care about whether the\ntable is specified with ONLY, or not. It just makes the data\nrepresented by the table empty (in a transactional way).\n\nSo, how shall foreign tables perform?\n\nOnce ExecuteTruncate() has picked up a foreign table, according to the\nONLY clause, shall the FDW driver consider\nthe context where the foreign tables are specified? And, what behavior\nis consistent?\nI think that the FDW driver shall make the external data represented by\nthe foreign table empty, regardless of the\nstructure, location, volume and others.\n\nTherefore, if we follow the above assumption, we don't need to inform\nthe context where foreign-tables are\npicked up (TRUNCATE_REL_CONTEXT_*), so postgres_fdw shall not control\nthe remote TRUNCATE query\naccording to the flags. 
It always truncates the entire tables (if\nmultiple) on behalf of the foreign tables.\n\nAs an aside, if postgres_fdw maps a remote table with an \"ONLY\" clause,\nit is exactly a situation where we add\nthe \"ONLY\" clause on the truncate command, because it is a representation\nof the remote \"ONLY parent_table\" in\nthis case.\n\nWhat are your thoughts?\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n", "msg_date": "Fri, 9 Apr 2021 12:33:07 +0900", "msg_from": "Kohei KaiGai <kaigai@heterodb.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On 2021/04/09 11:05, Zhihong Yu wrote:\n> \n> \n> On Thu, Apr 8, 2021 at 6:47 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com <mailto:bharath.rupireddyforpostgres@gmail.com>> wrote:\n> \n> On Thu, Apr 8, 2021 at 6:44 PM Fujii Masao <masao.fujii@oss.nttdata.com <mailto:masao.fujii@oss.nttdata.com>> wrote:\n> > The followings are the open items and discussion points that I'm thinking of.\n> >\n> > 1. Currently the extra information (TRUNCATE_REL_CONTEXT_NORMAL, TRUNCATE_REL_CONTEXT_ONLY or TRUNCATE_REL_CONTEXT_CASCADING) about how a foreign table was specified as the target to truncate in TRUNCATE command is collected and passed to FDW. Does this really need to be passed to FDW? Seems Stephen, Michael and I think that's necessary. But Kaigai-san does not. I also think that TRUNCATE_REL_CONTEXT_CASCADING can be removed because there seems no use case for that maybe.\n> \n> I think we should remove the unused enums/macros, instead we could\n> mention a note of the extensibility of those enums/macros in the\n> comments section around the enum/macro definitions.\n\n+1\n\n\n> \n> > 2. Currently when the same foreign table is specified multiple times in the command, the extra information only for the foreign table found first is collected. 
For example, when \"TRUNCATE ft, ONLY ft\" is executed, TRUNCATE_REL_CONTEXT_NORMAL is collected and _ONLY is ignored because \"ft\" is found first. Is this OK? Or we should collect all, e.g., both _NORMAL and _ONLY should be collected in that example? I think that the current approach (i.e., collect the extra info about table found first if the same table is specified multiple times) is good because even local tables are also treated the same way. But Kaigai-san does not.\n> \n> IMO, the foreign truncate command should be constructed by collecting\n> all the information i.e. \"TRUNCATE ft, ONLY ft\" and let the remote\n> server execute how it wants to execute. That will be consistent and no\n> extra logic is required to track the already seen foreign tables while\n> foreign table collection/foreign truncate command is being prepared on\n> the local server.\n\nBut isn't it difficult for remote server to determine how to execute? Please imagine the case where there are four tables as follows.\n\n- regular table \"remote_parent\" in the remote server\n- regular table \"remote_child\" inheriting \"remote_parent\" table in the remote server\n- foreign table \"local_parent\" in the local server, accessing \"remote_parent\" table\n- regular table \"local_child\" inheriting \"local_parent\" table in the local server\n\nWhen \"TRUNCATE ONLY local_parent, local_parent\" is executed, local_child is not truncated because of ONLY clause. Then if we collect all the information about context, both TRUNCATE_REL_CONTEXT_NORMAL and _ONLY are passed to FDW. In this case how should FDW determine whether to use ONLY when issuing TRUNCATE command to the remote server? Isn't it difficult to do that? If FDW determines not to use ONLY because _NORMAL flag is passed, both remote_parent and remote_child tables are truncated. 
That is, though both local_child and remote_child are the inheriting tables, isn't it strange that only the former is ignored and the latter is truncated?\n\n\n> \n> I was thinking that the postgres throws error or warning for commands\n> such as truncate, vaccum, analyze when the same tables are specified,\n> but seems like that's not what it does.\n> \n> > 3. Currently postgres_fdw specifies ONLY clause in TRUNCATE command that it constructs. That is, if the foreign table is specified with ONLY, postgres_fdw also issues the TRUNCATE command for the corresponding remote table with ONLY to the remote server. Then only root table is truncated in remote server side, and the tables inheriting that are not truncated. Is this behavior desirable? Seems Michael and I think this behavior is OK. But Kaigai-san does not.\n> \n> I'm okay with the behaviour as it is consistent with what ONLY does to\n> local tables. Documenting this behaviour(if not done already) is a\n> better way I think.\n\n+1\n\n\n> \n> > 4. Tab-completion for TRUNCATE should be updated so that also foreign tables are displayed.\n> \n> It will be good to have.\n\nPatch attached.\n\n\n> \n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com>\n> \n> \n> w.r.t. point #1:\n> bq. I think we should remove the unused enums/macros,\n> \n> I agree. 
When there is more concrete use case which requires new enum, we can add enum whose meaning would be clearer.\n\n+1\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Fri, 9 Apr 2021 22:36:53 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "\n\nOn 2021/04/09 12:33, Kohei KaiGai wrote:\n> 2021年4月8日(木) 22:14 Fujii Masao <masao.fujii@oss.nttdata.com>:\n>>\n>> On 2021/04/08 22:02, Kohei KaiGai wrote:\n>>>> Anyway, attached is the updated version of the patch. This is still based on the latest Kazutaka-san's patch. That is, extra list for ONLY is still passed to FDW. What about committing this version at first? Then we can continue the discussion and change the behavior later if necessary.\n>>\n>> Pushed! Thank all involved in this development!!\n>> For record, I attached the final patch I committed.\n>>\n>>\n>>> Ok, it's fair enought for me.\n>>>\n>>> I'll try to sort out my thought, then raise a follow-up discussion if necessary.\n>>\n>> Thanks!\n>>\n>> The followings are the open items and discussion points that I'm thinking of.\n>>\n>> 1. Currently the extra information (TRUNCATE_REL_CONTEXT_NORMAL, TRUNCATE_REL_CONTEXT_ONLY or TRUNCATE_REL_CONTEXT_CASCADING) about how a foreign table was specified as the target to truncate in TRUNCATE command is collected and passed to FDW. Does this really need to be passed to FDW? Seems Stephen, Michael and I think that's necessary. But Kaigai-san does not. I also think that TRUNCATE_REL_CONTEXT_CASCADING can be removed because there seems no use case for that maybe.\n>>\n>> 2. Currently when the same foreign table is specified multiple times in the command, the extra information only for the foreign table found first is collected. 
For example, when \"TRUNCATE ft, ONLY ft\" is executed, TRUNCATE_REL_CONTEXT_NORMAL is collected and _ONLY is ignored because \"ft\" is found first. Is this OK? Or we should collect all, e.g., both _NORMAL and _ONLY should be collected in that example? I think that the current approach (i.e., collect the extra info about table found first if the same table is specified multiple times) is good because even local tables are also treated the same way. But Kaigai-san does not.\n>>\n>> 3. Currently postgres_fdw specifies ONLY clause in TRUNCATE command that it constructs. That is, if the foreign table is specified with ONLY, postgres_fdw also issues the TRUNCATE command for the corresponding remote table with ONLY to the remote server. Then only root table is truncated in remote server side, and the tables inheriting that are not truncated. Is this behavior desirable? Seems Michael and I think this behavior is OK. But Kaigai-san does not.\n>>\n> Prior to the discussion of 1-3, I like to clarify the role of foreign-tables.\n> (Likely, it will lead a natural conclusion for the above open items.)\n> \n> As literal of SQL/MED (Management of External Data), a foreign table\n> is a representation of external data in PostgreSQL.\n> It allows to read and (optionally) write the external data wrapped by\n> FDW drivers, as if we usually read / write heap tables.\n> By the FDW-APIs, the core PostgreSQL does not care about the\n> structure, location, volume and other characteristics of\n> the external data itself. It expects FDW-APIs invocation will perform\n> as if we access a regular heap table.\n> \n> On the other hands, we can say local tables are representation of\n> \"internal\" data in PostgreSQL.\n> A heap table is consists of one or more files (per BLCKSZ *\n> RELSEG_SIZE), and table-am intermediates\n> the on-disk data to/from on-memory structure (TupleTableSlot).\n> Here are no big differences in the concept. 
Ok?\n> \n> As you know, ONLY clause controls whether TRUNCATE command shall run\n> on child-tables also, not only the parent.\n> If \"ONLY parent_table\" is given, its child tables are not picked up by\n> ExecuteTruncate(), unless child tables are not\n> listed up individually.\n> Then, once ExecuteTruncate() picked up the relations, it makes the\n> relations empty using table-am\n> (relation_set_new_filenode), and the callee\n> (heapam_relation_set_new_filenode) does not care about whether the\n> table is specified with ONLY, or not. It just makes the data\n> represented by the table empty (in transactional way).\n> \n> So, how foreign tables shall perform?\n> \n> Once ExecuteTruncate() picked up a foreign table, according to\n> ONLY-clause, does FDW driver shall consider\n> the context where the foreign tables are specified? And, what behavior\n> is consistent?\n> I think that FDW driver shall make the external data represented by\n> the foreign table empty, regardless of the\n> structure, location, volume and others.\n> \n> Therefore, if we follow the above assumption, we don't need to inform\n> the context where foreign-tables are\n> picked up (TRUNCATE_REL_CONTEXT_*), so postgres_fdw shall not control\n> the remote TRUNCATE query\n> according to the flags. It always truncate the entire tables (if\n> multiple) on behalf of the foreign tables.\n\nThis makes me wonder if the information about CASCADE/RESTRICT (maybe also RESTART/CONTINUE) also should not be passed to FDW. You're thinking that? 
Or only ONLY clause should be ignored for a foreign table?\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 9 Apr 2021 22:50:59 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Fri, Apr 9, 2021 at 7:06 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > > 4. Tab-completion for TRUNCATE should be updated so that also foreign tables are displayed.\n> >\n> > It will be good to have.\n>\n> Patch attached.\n\nTab completion patch LGTM and it works as expected.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 9 Apr 2021 19:40:20 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Fri, Apr 9, 2021 at 7:06 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > > 2. Currently when the same foreign table is specified multiple times in the command, the extra information only for the foreign table found first is collected. For example, when \"TRUNCATE ft, ONLY ft\" is executed, TRUNCATE_REL_CONTEXT_NORMAL is collected and _ONLY is ignored because \"ft\" is found first. Is this OK? Or we should collect all, e.g., both _NORMAL and _ONLY should be collected in that example? I think that the current approach (i.e., collect the extra info about table found first if the same table is specified multiple times) is good because even local tables are also treated the same way. But Kaigai-san does not.\n> >\n> > IMO, the foreign truncate command should be constructed by collecting\n> > all the information i.e. \"TRUNCATE ft, ONLY ft\" and let the remote\n> > server execute how it wants to execute. 
That will be consistent and no\n> > extra logic is required to track the already seen foreign tables while\n> > foreign table collection/foreign truncate command is being prepared on\n> > the local server.\n>\n> But isn't it difficult for remote server to determine how to execute? Please imagine the case where there are four tables as follows.\n>\n> - regular table \"remote_parent\" in the remote server\n> - regular table \"remote_child\" inheriting \"remote_parent\" table in the remote server\n> - foreign table \"local_parent\" in the local server, accessing \"remote_parent\" table\n> - regular table \"local_child\" inheriting \"local_parent\" table in the local server\n>\n> When \"TRUNCATE ONLY local_parent, local_parent\" is executed, local_child is not truncated because of ONLY clause. Then if we collect all the information about context, both TRUNCATE_REL_CONTEXT_NORMAL and _ONLY are passed to FDW. In this case how should FDW determine whether to use ONLY when issuing TRUNCATE command to the remote server? Isn't it difficult to do that? If FDW determines not to use ONLY because _NORMAL flag is passed, both remote_parent and remote_child tables are truncated. That is, though both local_child and remote_child are the inheriting tables, isn't it strange that only the former is ignored and the latter is truncated?\n\nMy understanding was wrong. I see the below code in ExecuteTruncate:\n    /* don't throw error for \"TRUNCATE foo, foo\" */\n    if (list_member_oid(relids, myrelid))\n    {\n        table_close(rel, lockmode);\n        continue;\n    }\n\nThis basically tells us that the first occurrence of a table is\nconsidered and the rest are ignored. This is what we are going to have in our\nrelids_extra and relids. So, we will be sending only the first\noccurrence's info to the foreign truncate command. 
I agree with the\ncurrent approach \"i.e., collect the extra info about table found first\nif the same table is specified multiple times\" for the same reason\nthat \"local tables are also treated the same way.\"\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 9 Apr 2021 20:14:23 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "2021年4月9日(金) 22:51 Fujii Masao <masao.fujii@oss.nttdata.com>:\n>\n> On 2021/04/09 12:33, Kohei KaiGai wrote:\n> > 2021年4月8日(木) 22:14 Fujii Masao <masao.fujii@oss.nttdata.com>:\n> >>\n> >> On 2021/04/08 22:02, Kohei KaiGai wrote:\n> >>>> Anyway, attached is the updated version of the patch. This is still based on the latest Kazutaka-san's patch. That is, extra list for ONLY is still passed to FDW. What about committing this version at first? Then we can continue the discussion and change the behavior later if necessary.\n> >>\n> >> Pushed! Thank all involved in this development!!\n> >> For record, I attached the final patch I committed.\n> >>\n> >>\n> >>> Ok, it's fair enought for me.\n> >>>\n> >>> I'll try to sort out my thought, then raise a follow-up discussion if necessary.\n> >>\n> >> Thanks!\n> >>\n> >> The followings are the open items and discussion points that I'm thinking of.\n> >>\n> >> 1. Currently the extra information (TRUNCATE_REL_CONTEXT_NORMAL, TRUNCATE_REL_CONTEXT_ONLY or TRUNCATE_REL_CONTEXT_CASCADING) about how a foreign table was specified as the target to truncate in TRUNCATE command is collected and passed to FDW. Does this really need to be passed to FDW? Seems Stephen, Michael and I think that's necessary. But Kaigai-san does not. I also think that TRUNCATE_REL_CONTEXT_CASCADING can be removed because there seems no use case for that maybe.\n> >>\n> >> 2. 
Currently when the same foreign table is specified multiple times in the command, the extra information only for the foreign table found first is collected. For example, when \"TRUNCATE ft, ONLY ft\" is executed, TRUNCATE_REL_CONTEXT_NORMAL is collected and _ONLY is ignored because \"ft\" is found first. Is this OK? Or we should collect all, e.g., both _NORMAL and _ONLY should be collected in that example? I think that the current approach (i.e., collect the extra info about table found first if the same table is specified multiple times) is good because even local tables are also treated the same way. But Kaigai-san does not.\n> >>\n> >> 3. Currently postgres_fdw specifies ONLY clause in TRUNCATE command that it constructs. That is, if the foreign table is specified with ONLY, postgres_fdw also issues the TRUNCATE command for the corresponding remote table with ONLY to the remote server. Then only root table is truncated in remote server side, and the tables inheriting that are not truncated. Is this behavior desirable? Seems Michael and I think this behavior is OK. But Kaigai-san does not.\n> >>\n> > Prior to the discussion of 1-3, I like to clarify the role of foreign-tables.\n> > (Likely, it will lead a natural conclusion for the above open items.)\n> >\n> > As literal of SQL/MED (Management of External Data), a foreign table\n> > is a representation of external data in PostgreSQL.\n> > It allows to read and (optionally) write the external data wrapped by\n> > FDW drivers, as if we usually read / write heap tables.\n> > By the FDW-APIs, the core PostgreSQL does not care about the\n> > structure, location, volume and other characteristics of\n> > the external data itself. 
It expects FDW-APIs invocation will perform\n> > as if we access a regular heap table.\n> >\n> > On the other hands, we can say local tables are representation of\n> > \"internal\" data in PostgreSQL.\n> > A heap table is consists of one or more files (per BLCKSZ *\n> > RELSEG_SIZE), and table-am intermediates\n> > the on-disk data to/from on-memory structure (TupleTableSlot).\n> > Here are no big differences in the concept. Ok?\n> >\n> > As you know, ONLY clause controls whether TRUNCATE command shall run\n> > on child-tables also, not only the parent.\n> > If \"ONLY parent_table\" is given, its child tables are not picked up by\n> > ExecuteTruncate(), unless child tables are not\n> > listed up individually.\n> > Then, once ExecuteTruncate() picked up the relations, it makes the\n> > relations empty using table-am\n> > (relation_set_new_filenode), and the callee\n> > (heapam_relation_set_new_filenode) does not care about whether the\n> > table is specified with ONLY, or not. It just makes the data\n> > represented by the table empty (in transactional way).\n> >\n> > So, how foreign tables shall perform?\n> >\n> > Once ExecuteTruncate() picked up a foreign table, according to\n> > ONLY-clause, does FDW driver shall consider\n> > the context where the foreign tables are specified? And, what behavior\n> > is consistent?\n> > I think that FDW driver shall make the external data represented by\n> > the foreign table empty, regardless of the\n> > structure, location, volume and others.\n> >\n> > Therefore, if we follow the above assumption, we don't need to inform\n> > the context where foreign-tables are\n> > picked up (TRUNCATE_REL_CONTEXT_*), so postgres_fdw shall not control\n> > the remote TRUNCATE query\n> > according to the flags. It always truncate the entire tables (if\n> > multiple) on behalf of the foreign tables.\n>\n> This makes me wonder if the information about CASCADE/RESTRICT (maybe also RESTART/CONTINUE) also should not be passed to FDW. 
You're thinking that? Or only ONLY clause should be ignored for a foreign table?\n>\nI think the above information (DropBehavior and restart_seqs) is\nvaluable to pass.\n\nThe CASCADE/RESTRICT clause controls whether the truncate command also\neliminates\nthe rows that block deletion (FKs in RDBMS). Only the FDW driver can\nknow whether the\nexternal data has a \"removal-blocker\", thus we need to pass the\nDropBehavior to the callback.\n\nThe RESTART/CONTINUE clause also controls whether the truncate command restarts\nthe relevant resources that are associated with the target table\n(sequences in RDBMS).\nOnly the FDW driver can know whether the external data has relevant\nresources to reset,\nthus we need to pass \"restart_seqs\" to the callback.\n\nUnlike the above two parameters, the role of the ONLY clause is already\nfinished by the time\nExecuteTruncate() has picked up the target relations, from the\nstandpoint of the above\nunderstanding of foreign tables and external data.\n\nThoughts?\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n", "msg_date": "Fri, 9 Apr 2021 23:49:09 +0900", "msg_from": "Kohei KaiGai <kaigai@heterodb.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Thu, Apr 08, 2021 at 10:14:17PM +0900, Fujii Masao wrote:\n> On 2021/04/08 22:02, Kohei KaiGai wrote:\n> > > Anyway, attached is the updated version of the patch. This is still based on the latest Kazutaka-san's patch. That is, extra list for ONLY is still passed to FDW. What about committing this version at first? Then we can continue the discussion and change the behavior later if necessary.\n> \n> Pushed! 
Thank all involved in this development!!\n> For record, I attached the final patch I committed.\n\nFind attached language fixes.\n\nI'm also proposing to convert an if/else to an switch(), since I don't like\n\"if/else if\" without an \"else\", and since the compiler can warn about missing\nenum values in switch cases. You could also write:\n| Assert(behavior == DROP_RESTRICT || behavior == DROP_CASCADE)\n\nAlso, you currently test:\n> +\t\tif (extra & TRUNCATE_REL_CONTEXT_ONLY)\n\nbut TRUNCATE_REL_ aren't indepedent bits, so shouldn't be tested with \"&\".\n\nsrc/include/commands/tablecmds.h-#define TRUNCATE_REL_CONTEXT_NORMAL 1 /* specified without ONLY clause */\nsrc/include/commands/tablecmds.h-#define TRUNCATE_REL_CONTEXT_ONLY 2 /* specified with ONLY clause */\nsrc/include/commands/tablecmds.h:#define TRUNCATE_REL_CONTEXT_CASCADING 3 /* not specified but truncated\nsrc/include/commands/tablecmds.h- * due to dependency (e.g.,\nsrc/include/commands/tablecmds.h- * partition table) */\n\n> +++ b/contrib/postgres_fdw/deparse.c\n> @@ -2172,6 +2173,43 @@ deparseAnalyzeSql(StringInfo buf, Relation rel, List **retrieved_attrs)\n> \tdeparseRelation(buf, rel);\n> }\n> \n> +/*\n> + * Construct a simple \"TRUNCATE rel\" statement\n> + */\n> +void\n> +deparseTruncateSql(StringInfo buf,\n> +\t\t\t\t List *rels,\n> +\t\t\t\t List *rels_extra,\n> +\t\t\t\t DropBehavior behavior,\n> +\t\t\t\t bool restart_seqs)\n> +{\n> +\tListCell *lc1,\n> +\t\t\t *lc2;\n> +\n> +\tappendStringInfoString(buf, \"TRUNCATE \");\n> +\n> +\tforboth(lc1, rels, lc2, rels_extra)\n> +\t{\n> +\t\tRelation\trel = lfirst(lc1);\n> +\t\tint\t\t\textra = lfirst_int(lc2);\n> +\n> +\t\tif (lc1 != list_head(rels))\n> +\t\t\tappendStringInfoString(buf, \", \");\n> +\t\tif (extra & TRUNCATE_REL_CONTEXT_ONLY)\n> +\t\t\tappendStringInfoString(buf, \"ONLY \");\n> +\n> +\t\tdeparseRelation(buf, rel);\n> +\t}\n> +\n> +\tappendStringInfo(buf, \" %s IDENTITY\",\n> +\t\t\t\t\t restart_seqs ? 
\"RESTART\" : \"CONTINUE\");\n> +\n> +\tif (behavior == DROP_RESTRICT)\n> +\t\tappendStringInfoString(buf, \" RESTRICT\");\n> +\telse if (behavior == DROP_CASCADE)\n> +\t\tappendStringInfoString(buf, \" CASCADE\");\n> +}", "msg_date": "Sat, 10 Apr 2021 23:16:58 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Sun, Apr 11, 2021 at 9:47 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Find attached language fixes.\n\nThanks for the patches.\n\n> I'm also proposing to convert an if/else to an switch(), since I don't like\n> \"if/else if\" without an \"else\", and since the compiler can warn about missing\n> enum values in switch cases.\n\nI think we have a good bunch of if, else-if (without else) in the code\nbase, and I'm not sure the compilers have warned about them. Apart\nfrom that whether if-else or switch-case is just a coding choice. And\nwe have only two values for DropBehavior enum i.e DROP_RESTRICT and\nDROP_CASCADE(I'm not sure we will extend this enum to have more\nvalues), if we have more then switch case would have looked cleaner.\nBut IMO, existing if and else-if would be good. Having said that, it's\nup to the committer which one they think better in this case.\n\n> You could also write:\n> | Assert(behavior == DROP_RESTRICT || behavior == DROP_CASCADE)\n\nIMO, we don't need to assert on behaviour as we just carry that\nvariable from ExecuteTruncateGuts to postgresExecForeignTruncate\nwithout any modifications. And ExecuteTruncateGuts would get it from\nthe syntaxer, so no point it will have a different value than\nDROP_RESTRICT or DROP_CASCADE.\n\n> Also, you currently test:\n> > + if (extra & TRUNCATE_REL_CONTEXT_ONLY)\n>\n> but TRUNCATE_REL_ aren't indepedent bits, so shouldn't be tested with \"&\".\n\nYeah this is an issue. We could just change the #defines to values\n0x0001, 0x0002, 0x0004, 0x0008 ... 
0x0020 and so on, and then testing\nwith & would work. So, this way, more than one option can be multiplexed\ninto the same int value. To multiplex, we need to think: will there be\na scenario where a single rel in the truncate can have multiple\noptions at a time and do we want to distinguish these options while\ndeparsing?\n\n#define TRUNCATE_REL_CONTEXT_NORMAL 0x0001 /* specified without\nONLY clause */\n#define TRUNCATE_REL_CONTEXT_ONLY 0x0002 /* specified with\nONLY clause */\n#define TRUNCATE_REL_CONTEXT_CASCADING 0x0004 /* not specified\nbut truncated\n\nAnd I'm not sure what's the agreement on retaining or removing #define\nvalues; currently I see only TRUNCATE_REL_CONTEXT_ONLY is being used,\nothers are just being set but not used. As I said upthread, it will be\ngood to remove the unused macros/enums, retain only the ones that are\nused; especially TRUNCATE_REL_CONTEXT_CASCADING, which I feel is not required,\nbecause we can add the child partitions that are foreign tables\nto relids as just normal foreign tables with the TRUNCATE_REL_CONTEXT_ONLY\noption.\n\nOn the patches:\n0001-WIP-doc-review-Allow-TRUNCATE-command-to-truncate-fo.patch ---> LGTM.\n0002-Convert-an-if-else-if-without-an-else-to-a-switch.patch. 
--> IMO,\nwe can ignore this patch.\n0003-Test-integer-using-and-not.patch --> if we redefine the macros to\nmultiplex them into a single int value, we don't need this patch.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 11 Apr 2021 15:45:36 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "\n\nOn 2021/04/11 19:15, Bharath Rupireddy wrote:\n> On Sun, Apr 11, 2021 at 9:47 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> Find attached language fixes.\n> \n> Thanks for the patches.\n\nThanks for the patches!\n\n0001 patch basically looks good to me.\n\n+ <literal>behavior</literal> must be specified as\n+ <literal>DROP_RESTRICT</literal> or <literal>DROP_CASCADE</literal>.\n+ If specified as <literal>DROP_RESTRICT</literal>, the\n+ <literal>RESTRICT</literal> option will be included in the\n <command>TRUNCATE</command> command.\n+ If specified as <literal>DROP_CASCADE</literal>, the\n+ <literal>CASCADE</literal> option will be included.\n\nMaybe \"will be included\" is confusing? Because FDW might not include\nthe RESTRICT in the TRUNCATE command that it will issue\nwhen DROP_RESTRICT is specified, for example. To be more precise,\nwe should document something like \"If specified as DROP_RESTRICT,\nthe RESTRICT option was included in the original TRUNCATE command\"?\n\n\n>> I'm also proposing to convert an if/else to an switch(), since I don't like\n>> \"if/else if\" without an \"else\", and since the compiler can warn about missing\n>> enum values in switch cases.\n> \n> I think we have a good bunch of if, else-if (without else) in the code\n> base, and I'm not sure the compilers have warned about them. Apart\n> from that whether if-else or switch-case is just a coding choice. 
And\n> we have only two values for DropBehavior enum i.e DROP_RESTRICT and\n> DROP_CASCADE(I'm not sure we will extend this enum to have more\n> values), if we have more then switch case would have looked cleaner.\n> But IMO, existing if and else-if would be good. Having said that, it's\n> up to the committer which one they think better in this case.\n\nEither works at least for me. Also for now I have no strong opinion\nto change the condition so that it uses switch().\n\n\n>> You could also write:\n>> | Assert(behavior == DROP_RESTRICT || behavior == DROP_CASCADE)\n> \n> IMO, we don't need to assert on behaviour as we just carry that\n> variable from ExecuteTruncateGuts to postgresExecForeignTruncate\n> without any modifications. And ExecuteTruncateGuts would get it from\n> the syntaxer, so no point it will have a different value than\n> DROP_RESTRICT or DROP_CASCADE.\n> \n>> Also, you currently test:\n>>> + if (extra & TRUNCATE_REL_CONTEXT_ONLY)\n>>\n>> but TRUNCATE_REL_ aren't indepedent bits, so shouldn't be tested with \"&\".\n\nYou're right.\n\n\n> Yeah this is an issue. We could just change the #defines to values\n> 0x0001, 0x0002, 0x0004, 0x0008 ... 0x0020 and so on and then testing\n> with & would work. So, this way, more than option can be multiplexed\n> into the same int value. To multiplex, we need to think: will there be\n> a scenario where a single rel in the truncate can have multiple\n> options at a time and do we want to distinguish these options while\n> deparsing?\n> \n> #define TRUNCATE_REL_CONTEXT_NORMAL 0x0001 /* specified without\n> ONLY clause */\n> #define TRUNCATE_REL_CONTEXT_ONLY 0x0002 /* specified with\n> ONLY clause */\n> #define TRUNCATE_REL_CONTEXT_CASCADING 0x0004 /* not specified\n> but truncated\n> \n> And I'm not sure what's the agreement on retaining or removing #define\n> values, currently I see only TRUNCATE_REL_CONTEXT_ONLY is being used,\n> others are just being set but not used. 
As I said upthread, it will be\n> good to remove the unused macros/enums, retain only the ones that are\n> used, especially TRUNCATE_REL_CONTEXT_CASCADING this is not required I\n> feel, because we can add the child partitions that are foreign tables\n> to relids as just normal foreign tables with TRUNCATE_REL_CONTEXT_ONLY\n> option.\n\nOur current consensus seems to be that TRUNCATE_REL_CONTEXT_NORMAL and\nTRUNCATE_REL_CONTEXT_CASCADING should be removed because they are not used.\nSince Kaigai-san wants to remove the extra information altogether,\nI guess he also agrees to remove both TRUNCATE_REL_CONTEXT_NORMAL\nand _CASCADING. If this is right, we should apply the 0003 patch and remove\nthose two macro values. Or we should make the extra info a boolean value\ninstead of an int, i.e., it indicates whether ONLY was specified or not.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 12 Apr 2021 20:31:54 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "\n\nOn 2021/04/09 23:10, Bharath Rupireddy wrote:\n> On Fri, Apr 9, 2021 at 7:06 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>> > 4. Tab-completion for TRUNCATE should be updated so that also foreign tables are displayed.\n>>>\n>>> It will be good to have.\n>>\n>> Patch attached.\n> \n> Tab completion patch LGTM and it works as expected.\n\nThanks for the review! 
Pushed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 12 Apr 2021 21:36:38 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Sun, Apr 11, 2021 at 03:45:36PM +0530, Bharath Rupireddy wrote:\n> On Sun, Apr 11, 2021 at 9:47 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Also, you currently test:\n> > > + if (extra & TRUNCATE_REL_CONTEXT_ONLY)\n> >\n> > but TRUNCATE_REL_ aren't indepedent bits, so shouldn't be tested with \"&\".\n> \n> Yeah this is an issue. We could just change the #defines to values\n> 0x0001, 0x0002, 0x0004, 0x0008 ... 0x0020 and so on and then testing\n> with & would work. So, this way, more than option can be multiplexed\n> into the same int value. To multiplex, we need to think: will there be\n> a scenario where a single rel in the truncate can have multiple\n> options at a time and do we want to distinguish these options while\n> deparsing?\n> \n> #define TRUNCATE_REL_CONTEXT_NORMAL 0x0001 /* specified without\n> ONLY clause */\n> #define TRUNCATE_REL_CONTEXT_ONLY 0x0002 /* specified with\n> ONLY clause */\n> #define TRUNCATE_REL_CONTEXT_CASCADING 0x0004 /* not specified\n> but truncated\n> \n> And I'm not sure what's the agreement on retaining or removing #define\n> values, currently I see only TRUNCATE_REL_CONTEXT_ONLY is being used,\n> others are just being set but not used. As I said upthread, it will be\n> good to remove the unused macros/enums, retain only the ones that are\n> used, especially TRUNCATE_REL_CONTEXT_CASCADING this is not required I\n> feel, because we can add the child partitions that are foreign tables\n> to relids as just normal foreign tables with TRUNCATE_REL_CONTEXT_ONLY\n> option.\n\nConverting to \"bits\" would collapse TRUNCATE_REL_CONTEXT_ONLY and\nTRUNCATE_REL_CONTEXT_NORMAL into a single bit. 
TRUNCATE_REL_CONTEXT_CASCADING\ncould optionally be removed.\n\n+1 to convert to bits instead of changing \"&\" to \"==\".\n\n-- \nJustin", "msg_date": "Mon, 12 Apr 2021 19:57:26 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Tue, Apr 13, 2021 at 6:27 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Sun, Apr 11, 2021 at 03:45:36PM +0530, Bharath Rupireddy wrote:\n> > On Sun, Apr 11, 2021 at 9:47 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > Also, you currently test:\n> > > > + if (extra & TRUNCATE_REL_CONTEXT_ONLY)\n> > >\n> > > but TRUNCATE_REL_ aren't indepedent bits, so shouldn't be tested with \"&\".\n> >\n> > Yeah this is an issue. We could just change the #defines to values\n> > 0x0001, 0x0002, 0x0004, 0x0008 ... 0x0020 and so on and then testing\n> > with & would work. So, this way, more than option can be multiplexed\n> > into the same int value. To multiplex, we need to think: will there be\n> > a scenario where a single rel in the truncate can have multiple\n> > options at a time and do we want to distinguish these options while\n> > deparsing?\n> >\n> > #define TRUNCATE_REL_CONTEXT_NORMAL 0x0001 /* specified without\n> > ONLY clause */\n> > #define TRUNCATE_REL_CONTEXT_ONLY 0x0002 /* specified with\n> > ONLY clause */\n> > #define TRUNCATE_REL_CONTEXT_CASCADING 0x0004 /* not specified\n> > but truncated\n> >\n> > And I'm not sure what's the agreement on retaining or removing #define\n> > values, currently I see only TRUNCATE_REL_CONTEXT_ONLY is being used,\n> > others are just being set but not used. 
As I said upthread, it will be\n> > good to remove the unused macros/enums, retain only the ones that are\n> > used, especially TRUNCATE_REL_CONTEXT_CASCADING this is not required I\n> > feel, because we can add the child partitions that are foreign tables\n> > to relids as just normal foreign tables with TRUNCATE_REL_CONTEXT_ONLY\n> > option.\n>\n> Converting to \"bits\" would collapse TRUNCATE_REL_CONTEXT_ONLY and\n> TRUNCATE_REL_CONTEXT_NORMAL into a single bit. TRUNCATE_REL_CONTEXT_CASCADING\n> could optionally be removed.\n>\n> +1 to convert to bits instead of changing \"&\" to \"==\".\n\nThanks for the patches.\n\nI agree to convert to bits and pass it as int value which is\nextensible i.e. we can pass more extra parameters across if required.\nAlso I'm not in favour of removing relids_extra altogether, we might\nneed this to send some info in future.\n\nBoth 0001 and 0002(along with the new phrasings) look good to me.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 13 Apr 2021 06:51:03 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "\n\nOn 2021/04/13 10:21, Bharath Rupireddy wrote:\n> I agree to convert to bits and pass it as int value which is\n> extensible i.e. 
we can pass more extra parameters across if required.\n\nLooks good to me.\n\n\n> Also I'm not in favour of removing relids_extra altogether, we might\n> need this to send some info in future.\n> \n> Both 0001 and 0002(along with the new phrasings) look good to me.\n\nThanks for updating the patches!\n\nOne question is; \"CONTEXT\" of \"TRUNCATE_REL_CONTEXT_ONLY\" is required?\nIf not, I'm tempted to shorten the name to \"TRUNCATE_REL_ONLY\" or something.\n\n+ <structname>Relation</structname> data structures for each\n+ foreign tables to be truncated.\n\n\"foreign tables\" should be \"foreign table\" because it follows \"each\"?\n\n+ <para>\n+ <literal>behavior</literal> is either\n+ <literal>DROP_RESTRICT</literal> or <literal>DROP_CASCADE</literal>.\n+ <literal>DROP_CASCADE</literal> indicates that the\n+ <literal>CASCADE</literal> option was specified in the original\n <command>TRUNCATE</command> command.\n\nWhy did you remove the description for DROP_RESTRICT?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 13 Apr 2021 12:38:35 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Tue, Apr 13, 2021 at 12:38:35PM +0900, Fujii Masao wrote:\n> + <structname>Relation</structname> data structures for each\n> + foreign tables to be truncated.\n> \n> \"foreign tables\" should be \"foreign table\" because it follows \"each\"?\n\nYes, you're right.\n\n> + <para>\n> + <literal>behavior</literal> is either\n> + <literal>DROP_RESTRICT</literal> or <literal>DROP_CASCADE</literal>.\n> + <literal>DROP_CASCADE</literal> indicates that the\n> + <literal>CASCADE</literal> option was specified in the original\n> <command>TRUNCATE</command> command.\n> \n> Why did you remove the description for DROP_RESTRICT?\n\nBecause in order to handle the default/unspecified case, 
the description was\ngoing to need to be something like:\n\n| DROP_RESTRICT indicates that the RESTRICT option was specified in the original\n| truncate command (or CASCADE option was NOT specified).\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 12 Apr 2021 22:46:37 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "2021年4月9日(金) 23:49 Kohei KaiGai <kaigai@heterodb.com>:\n>\n> 2021年4月9日(金) 22:51 Fujii Masao <masao.fujii@oss.nttdata.com>:\n> >\n> > On 2021/04/09 12:33, Kohei KaiGai wrote:\n> > > 2021年4月8日(木) 22:14 Fujii Masao <masao.fujii@oss.nttdata.com>:\n> > >>\n> > >> On 2021/04/08 22:02, Kohei KaiGai wrote:\n> > >>>> Anyway, attached is the updated version of the patch. This is still based on the latest Kazutaka-san's patch. That is, extra list for ONLY is still passed to FDW. What about committing this version at first? Then we can continue the discussion and change the behavior later if necessary.\n> > >>\n> > >> Pushed! Thank all involved in this development!!\n> > >> For record, I attached the final patch I committed.\n> > >>\n> > >>\n> > >>> Ok, it's fair enought for me.\n> > >>>\n> > >>> I'll try to sort out my thought, then raise a follow-up discussion if necessary.\n> > >>\n> > >> Thanks!\n> > >>\n> > >> The followings are the open items and discussion points that I'm thinking of.\n> > >>\n> > >> 1. Currently the extra information (TRUNCATE_REL_CONTEXT_NORMAL, TRUNCATE_REL_CONTEXT_ONLY or TRUNCATE_REL_CONTEXT_CASCADING) about how a foreign table was specified as the target to truncate in TRUNCATE command is collected and passed to FDW. Does this really need to be passed to FDW? Seems Stephen, Michael and I think that's necessary. But Kaigai-san does not. I also think that TRUNCATE_REL_CONTEXT_CASCADING can be removed because there seems no use case for that maybe.\n> > >>\n> > >> 2. 
Currently when the same foreign table is specified multiple times in the command, the extra information only for the foreign table found first is collected. For example, when \"TRUNCATE ft, ONLY ft\" is executed, TRUNCATE_REL_CONTEXT_NORMAL is collected and _ONLY is ignored because \"ft\" is found first. Is this OK? Or we should collect all, e.g., both _NORMAL and _ONLY should be collected in that example? I think that the current approach (i.e., collect the extra info about table found first if the same table is specified multiple times) is good because even local tables are also treated the same way. But Kaigai-san does not.\n> > >>\n> > >> 3. Currently postgres_fdw specifies ONLY clause in TRUNCATE command that it constructs. That is, if the foreign table is specified with ONLY, postgres_fdw also issues the TRUNCATE command for the corresponding remote table with ONLY to the remote server. Then only root table is truncated in remote server side, and the tables inheriting that are not truncated. Is this behavior desirable? Seems Michael and I think this behavior is OK. But Kaigai-san does not.\n> > >>\n> > > Prior to the discussion of 1-3, I like to clarify the role of foreign-tables.\n> > > (Likely, it will lead a natural conclusion for the above open items.)\n> > >\n> > > As literal of SQL/MED (Management of External Data), a foreign table\n> > > is a representation of external data in PostgreSQL.\n> > > It allows to read and (optionally) write the external data wrapped by\n> > > FDW drivers, as if we usually read / write heap tables.\n> > > By the FDW-APIs, the core PostgreSQL does not care about the\n> > > structure, location, volume and other characteristics of\n> > > the external data itself. 
It expects FDW-APIs invocation will perform\n> > > as if we access a regular heap table.\n> > >\n> > > On the other hands, we can say local tables are representation of\n> > > \"internal\" data in PostgreSQL.\n> > > A heap table is consists of one or more files (per BLCKSZ *\n> > > RELSEG_SIZE), and table-am intermediates\n> > > the on-disk data to/from on-memory structure (TupleTableSlot).\n> > > Here are no big differences in the concept. Ok?\n> > >\n> > > As you know, ONLY clause controls whether TRUNCATE command shall run\n> > > on child-tables also, not only the parent.\n> > > If \"ONLY parent_table\" is given, its child tables are not picked up by\n> > > ExecuteTruncate(), unless child tables are not\n> > > listed up individually.\n> > > Then, once ExecuteTruncate() picked up the relations, it makes the\n> > > relations empty using table-am\n> > > (relation_set_new_filenode), and the callee\n> > > (heapam_relation_set_new_filenode) does not care about whether the\n> > > table is specified with ONLY, or not. It just makes the data\n> > > represented by the table empty (in transactional way).\n> > >\n> > > So, how foreign tables shall perform?\n> > >\n> > > Once ExecuteTruncate() picked up a foreign table, according to\n> > > ONLY-clause, does FDW driver shall consider\n> > > the context where the foreign tables are specified? And, what behavior\n> > > is consistent?\n> > > I think that FDW driver shall make the external data represented by\n> > > the foreign table empty, regardless of the\n> > > structure, location, volume and others.\n> > >\n> > > Therefore, if we follow the above assumption, we don't need to inform\n> > > the context where foreign-tables are\n> > > picked up (TRUNCATE_REL_CONTEXT_*), so postgres_fdw shall not control\n> > > the remote TRUNCATE query\n> > > according to the flags. 
It always truncate the entire tables (if\n> > > multiple) on behalf of the foreign tables.\n> >\n> > This makes me wonder if the information about CASCADE/RESTRICT (maybe also RESTART/CONTINUE) also should not be passed to FDW. You're thinking that? Or only ONLY clause should be ignored for a foreign table?\n> >\n> I think the above information (DropBehavior and restart_seqs) are\n> valuable to pass.\n>\n> The CASCADE/RESTRICT clause controls whether the truncate command also\n> eliminates\n> the rows that blocks to delete (FKs in RDBMS). Only FDW driver can\n> know whether the\n> external data has \"removal-blocker\", thus we need to pass the\n> DropBehavior for the callback.\n>\n> The RESTART/CONTINUE clause also controle whether the truncate command restart\n> the relevant resources that is associated with the target table\n> (Sequences in RDBMS).\n> Only FDW driver can know whether the external data has relevant\n> resources to reset,\n> thus we need to pass the \"restart_seqs\" for the callback.\n>\n> Unlike above two parameters, the role of ONLY-clause is already\n> finished at the time\n> when ExecuteTruncate() picked up the target relations, from the\n> standpoint of above\n> understanding of foreign-tables and external data.\n>\n> Thought?\n>\nLet me remind the discussion at the design level.\n\nIf postgres_fdw (and other FDW drivers) needs to consider whether\nONLY-clause is given\non the foreign tables of them, what does a foreign table represent in\nPostgreSQL system?\n\nMy assumption is, a foreign table provides a view to external data, as\nif it performs like a table.\nTRUNCATE command eliminates all the segment files, even if a table\ncontains multiple\nunderlying files, never eliminate them partially.\nIf a foreign table is equivalent to a table in SQL operation level,\nindeed, ONLY-clause controls\nwhich tables are picked up by the TRUNCATE command, but never controls\nwhich portion of\nthe data shall be eliminated. 
So, I conclude that\nExecForeignTruncate() shall eliminate the entire\nexternal data on behalf of a foreign table, regardless of ONLY-clause.\n\nI think it is more significant to clarify prior to the implementation details.\nHow about your opinions?\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n", "msg_date": "Tue, 13 Apr 2021 14:22:35 +0900", "msg_from": "Kohei KaiGai <kaigai@heterodb.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "\n\nOn 2021/04/13 12:46, Justin Pryzby wrote:\n> On Tue, Apr 13, 2021 at 12:38:35PM +0900, Fujii Masao wrote:\n>> + <structname>Relation</structname> data structures for each\n>> + foreign tables to be truncated.\n>>\n>> \"foreign tables\" should be \"foreign table\" because it follows \"each\"?\n> \n> Yes, you're right.\n> \n>> + <para>\n>> + <literal>behavior</literal> is either\n>> + <literal>DROP_RESTRICT</literal> or <literal>DROP_CASCADE</literal>.\n>> + <literal>DROP_CASCADE</literal> indicates that the\n>> + <literal>CASCADE</literal> option was specified in the original\n>> <command>TRUNCATE</command> command.\n>>\n>> Why did you remove the description for DROP_RESTRICT?\n> \n> Because in order to handle the default/unspecified case, the description was\n> going to need to be something like:\n> \n> | DROP_RESTRICT indicates that the RESTRICT option was specified in the original\n> | truncate command (or CASCADE option was NOT specified).\n\nWhat about using \"requested\" instead of \"specified\"? If neither RESTRICT nor\nCASCADE is specified, we can think that user requested the default behavior,\ni.e., RESTRICT. 
So, for example,\n\n <literal>behavior</literal> is either <literal>DROP_RESTRICT</literal> or\n <literal>DROP_CASCADE</literal>, which indicates that the\n <literal>RESTRICT</literal> or <literal>CASCADE</literal> option was\n requested in the original <command>TRUNCATE</command> command, respectively.\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 13 Apr 2021 15:44:40 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "\n\nOn 2021/04/13 14:22, Kohei KaiGai wrote:\n> Let me remind the discussion at the design level.\n> \n> If postgres_fdw (and other FDW drivers) needs to consider whether\n> ONLY-clause is given\n> on the foreign tables of them, what does a foreign table represent in\n> PostgreSQL system?\n> \n> My assumption is, a foreign table provides a view to external data, as\n> if it performs like a table.\n> TRUNCATE command eliminates all the segment files, even if a table\n> contains multiple\n> underlying files, never eliminate them partially.\n> If a foreign table is equivalent to a table in SQL operation level,\n> indeed, ONLY-clause controls\n> which tables are picked up by the TRUNCATE command, but never controls\n> which portion of\n> the data shall be eliminated. So, I conclude that\n> ExecForeignTruncate() shall eliminate the entire\n> external data on behalf of a foreign table, regardless of ONLY-clause.\n> \n> I think it is more significant to clarify prior to the implementation details.\n> How about your opinions?\n\nI'm still thinking that it's better to pass all information including\nONLY clause about TRUNCATE command to FDW and leave FDW to determine\nhow to use them. How postgres_fdw should use the information about ONLY\nis debatable. 
But for now IMO users who explicitly specify ONLY clause for\nforeign tables understand the structure of remote tables and want to use ONLY\nin TRUNCATE command issued by postgres_fdw. But my opinion might be in the minority,\nso I'd like to hear more opinions about this from other developers.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 13 Apr 2021 16:17:12 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "At Tue, 13 Apr 2021 16:17:12 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2021/04/13 14:22, Kohei KaiGai wrote:\n> > Let me remind the discussion at the design level.\n> > If postgres_fdw (and other FDW drivers) needs to consider whether\n> > ONLY-clause is given\n> > on the foreign tables of them, what does a foreign table represent in\n> > PostgreSQL system?\n> > My assumption is, a foreign table provides a view to external data, as\n> > if it performs like a table.\n> > TRUNCATE command eliminates all the segment files, even if a table\n> > contains multiple\n> > underlying files, never eliminate them partially.\n> > If a foreign table is equivalent to a table in SQL operation level,\n> > indeed, ONLY-clause controls\n> > which tables are picked up by the TRUNCATE command, but never controls\n> > which portion of\n> > the data shall be eliminated. So, I conclude that\n> > ExecForeignTruncate() shall eliminate the entire\n> > external data on behalf of a foreign table, regardless of ONLY-clause.\n> > I think it is more significant to clarify prior to the implementation\n> > details.\n> > How about your opinions?\n> \n> I'm still thinking that it's better to pass all information including\n> ONLY clause about TRUNCATE command to FDW and leave FDW to determine\n> how to use them. 
How postgres_fdw should use the information about\n> ONLY\n> is debetable. But for now IMO that users who explicitly specify ONLY\n> clause for\n> foreign tables understand the structure of remote tables and want to\n> use ONLY\n> in TRUNCATE command issued by postgres_fdw. But my opinion might be\n> minority,\n> so I'd like to hear more opinion about this, from other developers.\n\n From the syntactical point of view, my opinion on this is that the\n\"ONLY\" in \"TRUNCATE ONLY\" is assumed to be consumed to tell the command to\ndisregard local children, so it cannot be propagated further, whether or not\nthe target relation has children.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 13 Apr 2021 17:29:49 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "2021年4月13日(火) 16:17 Fujii Masao <masao.fujii@oss.nttdata.com>:\n>\n> On 2021/04/13 14:22, Kohei KaiGai wrote:\n> > Let me remind the discussion at the design level.\n> >\n> > If postgres_fdw (and other FDW drivers) needs to consider whether\n> > ONLY-clause is given\n> > on the foreign tables of them, what does a foreign table represent in\n> > PostgreSQL system?\n> >\n> > My assumption is, a foreign table provides a view to external data, as\n> > if it performs like a table.\n> > TRUNCATE command eliminates all the segment files, even if a table\n> > contains multiple\n> > underlying files, never eliminate them partially.\n> > If a foreign table is equivalent to a table in SQL operation level,\n> > indeed, ONLY-clause controls\n> > which tables are picked up by the TRUNCATE command, but never controls\n> > which portion of\n> > the data shall be eliminated. 
So, I conclude that\n> > ExecForeignTruncate() shall eliminate the entire\n> > external data on behalf of a foreign table, regardless of ONLY-clause.\n> >\n> > I think it is more significant to clarify prior to the implementation details.\n> > How about your opinions?\n>\n> I'm still thinking that it's better to pass all information including\n> ONLY clause about TRUNCATE command to FDW and leave FDW to determine\n> how to use them. How postgres_fdw should use the information about ONLY\n> is debetable. But for now IMO that users who explicitly specify ONLY clause for\n> foreign tables understand the structure of remote tables and want to use ONLY\n> in TRUNCATE command issued by postgres_fdw. But my opinion might be minority,\n> so I'd like to hear more opinion about this, from other developers.\n>\nHere are two points to discuss.\n\nRegarding the FDW-APIs, yes, nobody can deny that someone may want to implement\ntheir own FDW module that adds special handling when its foreign table\nis specified\nwith ONLY-clause, even if we usually ignore it.\n\n\nOn the other hand, when we consider that a foreign table is an abstraction\nof an external\ndata source, at least, the current postgres_fdw's behavior is not consistent.\n\nWhen a foreign table by postgres_fdw that maps a remote parent table\nhas a local\nchild table,\n\nThis command shows all the rows from both the local and remote tables.\n\npostgres=# select * from f_table ;\n id | v\n----+-----------------------------\n 1 | remote table t_parent id=1\n 2 | remote table t_parent id=2\n 3 | remote table t_parent id=3\n 10 | remote table t_child1 id=10\n 11 | remote table t_child1 id=11\n 12 | remote table t_child1 id=12\n 20 | remote table t_child2 id=20\n 21 | remote table t_child2 id=21\n 22 | remote table t_child2 id=22\n 50 | it is l_child id=50\n 51 | it is l_child id=51\n 52 | it is l_child id=52\n 53 | it is l_child id=53\n(13 rows)\n\nIf f_table is specified with \"ONLY\", it picks up only the parent table\n(f_table),\nhowever, 
ONLY-clause is not pushed down to the remote side.\n\npostgres=# select * from only f_table ;\n id | v\n----+-----------------------------\n 1 | remote table t_parent id=1\n 2 | remote table t_parent id=2\n 3 | remote table t_parent id=3\n 10 | remote table t_child1 id=10\n 11 | remote table t_child1 id=11\n 12 | remote table t_child1 id=12\n 20 | remote table t_child2 id=20\n 21 | remote table t_child2 id=21\n 22 | remote table t_child2 id=22\n(9 rows)\n\nOn the other hand, TRUNCATE ONLY f_table works as follows...\n\npostgres=# truncate only f_table;\nTRUNCATE TABLE\npostgres=# select * from f_table ;\n id | v\n----+-----------------------------\n 10 | remote table t_child1 id=10\n 11 | remote table t_child1 id=11\n 12 | remote table t_child1 id=12\n 20 | remote table t_child2 id=20\n 21 | remote table t_child2 id=21\n 22 | remote table t_child2 id=22\n 50 | it is l_child id=50\n 51 | it is l_child id=51\n 52 | it is l_child id=52\n 53 | it is l_child id=53\n(10 rows)\n\nIt eliminates the rows only from the remote parent table although it\nis a part of the foreign table.\n\nMy expectation is that the above command would show only the rows from the local child\ntable (id=50...53).\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n", "msg_date": "Tue, 13 Apr 2021 18:07:25 +0900", "msg_from": "Kohei KaiGai <kaigai@heterodb.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Tue, Apr 13, 2021 at 2:37 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> Here are two points to discuss.\n>\n> Regarding to the FDW-APIs, yes, nobody can deny someone want to implement\n> their own FDW module that adds special handling when its foreign table\n> is specified\n> with ONLY-clause, even if we usually ignore.\n>\n>\n> On the other hand, when we consider a foreign table is an abstraction\n> of an external\n> data source, at least, the current postgres_fdw's behavior is not consistent.\n>\n> When a foreign 
table by postgres_fdw that maps a remote parent table,\n> has a local\n> child table,\n>\n> This command shows all the rows from both of local and remote.\n>\n> postgres=# select * from f_table ;\n> id | v\n> ----+-----------------------------\n> 1 | remote table t_parent id=1\n> 2 | remote table t_parent id=2\n> 3 | remote table t_parent id=3\n> 10 | remote table t_child1 id=10\n> 11 | remote table t_child1 id=11\n> 12 | remote table t_child1 id=12\n> 20 | remote table t_child2 id=20\n> 21 | remote table t_child2 id=21\n> 22 | remote table t_child2 id=22\n> 50 | it is l_child id=50\n> 51 | it is l_child id=51\n> 52 | it is l_child id=52\n> 53 | it is l_child id=53\n> (13 rows)\n>\n> If f_table is specified with \"ONLY\", it picks up only the parent table\n> (f_table),\n> however, ONLY-clause is not push down to the remote side.\n>\n> postgres=# select * from only f_table ;\n> id | v\n> ----+-----------------------------\n> 1 | remote table t_parent id=1\n> 2 | remote table t_parent id=2\n> 3 | remote table t_parent id=3\n> 10 | remote table t_child1 id=10\n> 11 | remote table t_child1 id=11\n> 12 | remote table t_child1 id=12\n> 20 | remote table t_child2 id=20\n> 21 | remote table t_child2 id=21\n> 22 | remote table t_child2 id=22\n> (9 rows)\n>\n> On the other hands, TRUNCATE ONLY f_table works as follows...\n>\n> postgres=# truncate only f_table;\n> TRUNCATE TABLE\n> postgres=# select * from f_table ;\n> id | v\n> ----+-----------------------------\n> 10 | remote table t_child1 id=10\n> 11 | remote table t_child1 id=11\n> 12 | remote table t_child1 id=12\n> 20 | remote table t_child2 id=20\n> 21 | remote table t_child2 id=21\n> 22 | remote table t_child2 id=22\n> 50 | it is l_child id=50\n> 51 | it is l_child id=51\n> 52 | it is l_child id=52\n> 53 | it is l_child id=53\n> (10 rows)\n>\n> It eliminates the rows only from the remote parent table although it\n> is a part of the foreign table.\n>\n> My expectation at the above command shows rows from the local 
child\n> table (id=50...53).\n\nYeah, ONLY clause is not pushed to the remote server in case of SELECT\ncommands. This is also true for DELETE and UPDATE commands on foreign\ntables. I'm not sure if it wasn't thought necessary or if there is an\nissue to push it or I may be missing something here. I think we can\nstart a separate thread to see other hackers' opinions on this.\n\nI'm not sure whether all the clauses that are possible for\nSELECT/UPDATE/DELETE/INSERT with local tables are pushed to the remote\nserver by postgres_fdw.\n\nWell, now foreign TRUNCATE pushes the ONLY clause to the remote server\nwhich is inconsistent when compared to SELECT/UPDATE/DELETE commands.\nIf we were to keep it consistent across all foreign commands that\nONLY clause is not pushed to remote server, then we can restrict for\nTRUNCATE too and even if \"TRUNCATE ONLY foreign_tbl\" is specified,\njust pass \"TRUNCATE foreign_tbl\" to remote server. Having said that, I\ndon't see any real problem in pushing the ONLY clause, at least in\ncase of TRUNCATE.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 13 Apr 2021 17:33:29 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "2021年4月13日(火) 21:03 Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>:\n>\n> On Tue, Apr 13, 2021 at 2:37 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> > Here are two points to discuss.\n> >\n> > Regarding to the FDW-APIs, yes, nobody can deny someone want to implement\n> > their own FDW module that adds special handling when its foreign table\n> > is specified\n> > with ONLY-clause, even if we usually ignore.\n> >\n> >\n> > On the other hand, when we consider a foreign table is an abstraction\n> > of an external\n> > data source, at least, the current postgres_fdw's behavior is not consistent.\n> >\n> > When a foreign table by 
postgres_fdw that maps a remote parent table,\n> > has a local\n> > child table,\n> >\n> > This command shows all the rows from both of local and remote.\n> >\n> > postgres=# select * from f_table ;\n> > id | v\n> > ----+-----------------------------\n> > 1 | remote table t_parent id=1\n> > 2 | remote table t_parent id=2\n> > 3 | remote table t_parent id=3\n> > 10 | remote table t_child1 id=10\n> > 11 | remote table t_child1 id=11\n> > 12 | remote table t_child1 id=12\n> > 20 | remote table t_child2 id=20\n> > 21 | remote table t_child2 id=21\n> > 22 | remote table t_child2 id=22\n> > 50 | it is l_child id=50\n> > 51 | it is l_child id=51\n> > 52 | it is l_child id=52\n> > 53 | it is l_child id=53\n> > (13 rows)\n> >\n> > If f_table is specified with \"ONLY\", it picks up only the parent table\n> > (f_table),\n> > however, ONLY-clause is not push down to the remote side.\n> >\n> > postgres=# select * from only f_table ;\n> > id | v\n> > ----+-----------------------------\n> > 1 | remote table t_parent id=1\n> > 2 | remote table t_parent id=2\n> > 3 | remote table t_parent id=3\n> > 10 | remote table t_child1 id=10\n> > 11 | remote table t_child1 id=11\n> > 12 | remote table t_child1 id=12\n> > 20 | remote table t_child2 id=20\n> > 21 | remote table t_child2 id=21\n> > 22 | remote table t_child2 id=22\n> > (9 rows)\n> >\n> > On the other hands, TRUNCATE ONLY f_table works as follows...\n> >\n> > postgres=# truncate only f_table;\n> > TRUNCATE TABLE\n> > postgres=# select * from f_table ;\n> > id | v\n> > ----+-----------------------------\n> > 10 | remote table t_child1 id=10\n> > 11 | remote table t_child1 id=11\n> > 12 | remote table t_child1 id=12\n> > 20 | remote table t_child2 id=20\n> > 21 | remote table t_child2 id=21\n> > 22 | remote table t_child2 id=22\n> > 50 | it is l_child id=50\n> > 51 | it is l_child id=51\n> > 52 | it is l_child id=52\n> > 53 | it is l_child id=53\n> > (10 rows)\n> >\n> > It eliminates the rows only from the remote parent table 
although it\n> > is a part of the foreign table.\n> >\n> > My expectation at the above command shows rows from the local child\n> > table (id=50...53).\n>\n> Yeah, ONLY clause is not pushed to the remote server in case of SELECT\n> commands. This is also true for DELETE and UPDATE commands on foreign\n> tables. I'm not sure if it wasn't thought necessary or if there is an\n> issue to push it or I may be missing something here. I think we can\n> start a separate thread to see other hackers' opinions on this.\n>\n> I'm not sure whether all the clauses that are possible for\n> SELECT/UPDATE/DELETE/INSERT with local tables are pushed to the remote\n> server by postgres_fdw.\n>\n> Well, now foreign TRUNCATE pushes the ONLY clause to the remote server\n> which is inconsistent when compared to SELECT/UPDATE/DELETE commands.\n> If we were to keep it consistent across all foreign commands that\n> ONLY clause is not pushed to remote server, then we can restrict for\n> TRUNCATE too and even if \"TRUNCATE ONLY foreign_tbl\" is specified,\n> just pass \"TRUNCATE foreign_tbl\" to remote server. 
Having said that, I\n> don't see any real problem in pushing the ONLY clause, at least in\n> case of TRUNCATE.\n>\nIf ONLY-clause would be pushed down to the remote query of postgres_fdw,\nwhat does the foreign-table represent in the local system?\n\nIn my understanding, a local foreign table by postgres_fdw is a\nrepresentation of\nentire tree of the remote parent table and its children.\nThus, we have assumed that DML command fetches rows from the remote\nparent table without ONLY-clause, once PostgreSQL picked up the foreign table\nas a scan target.\nI think we don't need to adjust definitions of the role of\nforeign-table, even if\nit represents non-RDBMS data sources.\n\nIf a foreign table by postgres_fdw supports a special table option to\nindicate adding\nONLY-clause when remote query uses remote tables, it is suitable to\nadd ONLY-clause\non the remote TRUNCATE command also, not only SELECT/INSERT/UPDATE/DELETE.\nIn other words, if a foreign-table represents only a remote parent\ntable, it is\nsuitable to truncate only the remote parent table.\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n", "msg_date": "Tue, 13 Apr 2021 23:25:38 +0900", "msg_from": "Kohei KaiGai <kaigai@heterodb.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "\n\nOn 2021/04/13 23:25, Kohei KaiGai wrote:\n> Tue, Apr 13, 2021 21:03 Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>:\n>> Yeah, ONLY clause is not pushed to the remote server in case of SELECT\n>> commands. This is also true for DELETE and UPDATE commands on foreign\n>> tables.\n\nThis sounds like a reasonable reason why ONLY should be ignored in TRUNCATE on\nforeign tables, for now. If there is an existing rule about how to treat\nONLY clause for foreign tables, basically TRUNCATE should follow that at this\nstage. 
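For illustration only (the object names here are invented, not taken from the patch under discussion), the inconsistency being described looks like this, assuming ft is a postgres_fdw foreign table whose remote target rt is a parent table with inheritance children on the remote server:

```sql
SELECT * FROM ONLY ft;   -- ONLY is consumed locally; the deparsed remote
                         -- query is still built against rt without ONLY,
                         -- so rows from rt's remote children are returned.

TRUNCATE ONLY ft;        -- if ONLY were pushed down, the remote server would
                         -- run "TRUNCATE ONLY rt", leaving rt's remote
                         -- children untouched, unlike the SELECT case above.
```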
Maybe we can change the rule, but it's an item for v15 or later?\n\n\n>> I'm not sure if it wasn't thought necessary or if there is an\n>> issue to push it or I may be missing something here.\n\nI could not find the past discussion about foreign tables and ONLY clause.\nI guess that ONLY is ignored in the SELECT-on-foreign-tables case because ONLY\nis interpreted outside the executor and it's not easy to change the executor\nso that ONLY is passed to FDW. 
Why do we need to allow RESTRICT to be specified for a foreign table\neven though it's an abstraction of an external data source?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 14 Apr 2021 00:00:14 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Tue, Apr 13, 2021 at 8:30 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2021/04/13 23:25, Kohei KaiGai wrote:\n> > 2021年4月13日(火) 21:03 Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>:\n> >> Yeah, ONLY clause is not pushed to the remote server in case of SELECT\n> >> commands. This is also true for DELETE and UPDATE commands on foreign\n> >> tables.\n>\n> This sounds reasonable reason why ONLY should be ignored in TRUNCATE on\n> foreign tables, for now. If there is the existing rule about how to treat\n> ONLY clause for foreign tables, basically TRUNCATE should follow that at this\n> stage. Maybe we can change the rule, but it's an item for v15 or later?\n>\n> >> I'm not sure if it wasn't thought necessary or if there is an\n> >> issue to push it or I may be missing something here.\n>\n> I could not find the past discussion about foreign tables and ONLY clause.\n> I guess that ONLY is ignored in SELECT on foreign tables case because ONLY\n> is interpreted outside the executor and it's not easy to change the executor\n> so that ONLY is passed to FDW. 
Maybe..\n>\n>\n> >> I think we can\n> >> start a separate thread to see other hackers' opinions on this.\n> >>\n> >> I'm not sure whether all the clauses that are possible for\n> >> SELECT/UPDATE/DELETE/INSERT with local tables are pushed to the remote\n> >> server by postgres_fdw.\n> >>\n> >> Well, now foreign TRUNCATE pushes the ONLY clause to the remote server\n> >> which is inconsistent when compared to SELECT/UPDATE/DELETE commands.\n> >> If we were to keep it consistent across all foreign commands that\n> >> ONLY clause is not pushed to remote server, then we can restrict for\n> >> TRUNCATE too and even if \"TRUNCATE ONLY foreign_tbl\" is specified,\n> >> just pass \"TRUNCATE foreign_tbl\" to remote server. Having said that, I\n> >> don't see any real problem in pushing the ONLY clause, at least in\n> >> case of TRUNCATE.\n> >>\n> > If ONLY-clause would be pushed down to the remote query of postgres_fdw,\n> > what does the foreign-table represent in the local system?\n> >\n> > In my understanding, a local foreign table by postgres_fdw is a\n> > representation of\n> > entire tree of the remote parent table and its children.\n>\n> If so, I'm still wondering why CASCADE/RESTRICT (i.e., DropBehavior) needs to\n> be passed to FDW. IOW, if a foreign table is an abstraction of an external\n> data source, ISTM that postgres_fdw should always issue TRUNCATE with\n> CASCADE. Why do we need to allow RESTRICT to be specified for a foreign table\n> even though it's an abstraction of an external data source?\n\nIMHO, we can push all the TRUNCATE options (ONLY, RESTRICTED, CASCADE,\nRESTART/CONTINUE IDENTITY), because it doesn't have any major\nchallenge(implementation wise) unlike pushing some clauses in\nSELECT/UPDATE/DELETE and we already do this on the master. It doesn't\nlook good and may confuse users, if we push some options and restrict\nothers. We should have an explicit note in the documentation saying we\npush all these options to the remote server. 
We can leave it to the\nuser to write TRUNCATE for foreign tables with the appropriate\noptions. If somebody complains about a problem that they will face\nwith this behavior, we can revisit. This is my opinion, others may\ndisagree.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 14 Apr 2021 09:24:02 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "2021年4月14日(水) 0:00 Fujii Masao <masao.fujii@oss.nttdata.com>:\n>\n> On 2021/04/13 23:25, Kohei KaiGai wrote:\n> > 2021年4月13日(火) 21:03 Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>:\n> >> Yeah, ONLY clause is not pushed to the remote server in case of SELECT\n> >> commands. This is also true for DELETE and UPDATE commands on foreign\n> >> tables.\n>\n> This sounds reasonable reason why ONLY should be ignored in TRUNCATE on\n> foreign tables, for now. If there is the existing rule about how to treat\n> ONLY clause for foreign tables, basically TRUNCATE should follow that at this\n> stage. Maybe we can change the rule, but it's an item for v15 or later?\n>\n>\n> >> I'm not sure if it wasn't thought necessary or if there is an\n> >> issue to push it or I may be missing something here.\n>\n> I could not find the past discussion about foreign tables and ONLY clause.\n> I guess that ONLY is ignored in SELECT on foreign tables case because ONLY\n> is interpreted outside the executor and it's not easy to change the executor\n> so that ONLY is passed to FDW. 
Maybe..\n>\n>\n> >> I think we can\n> >> start a separate thread to see other hackers' opinions on this.\n> >>\n> >> I'm not sure whether all the clauses that are possible for\n> >> SELECT/UPDATE/DELETE/INSERT with local tables are pushed to the remote\n> >> server by postgres_fdw.\n> >>\n> >> Well, now foreign TRUNCATE pushes the ONLY clause to the remote server\n> >> which is inconsistent when compared to SELECT/UPDATE/DELETE commands.\n> >> If we were to keep it consistent across all foreign commands that\n> >> ONLY clause is not pushed to remote server, then we can restrict for\n> >> TRUNCATE too and even if \"TRUNCATE ONLY foreign_tbl\" is specified,\n> >> just pass \"TRUNCATE foreign_tbl\" to remote server. Having said that, I\n> >> don't see any real problem in pushing the ONLY clause, at least in\n> >> case of TRUNCATE.\n> >>\n> > If ONLY-clause would be pushed down to the remote query of postgres_fdw,\n> > what does the foreign-table represent in the local system?\n> >\n> > In my understanding, a local foreign table by postgres_fdw is a\n> > representation of\n> > entire tree of the remote parent table and its children.\n>\n> If so, I'm still wondering why CASCADE/RESTRICT (i.e., DropBehavior) needs to\n> be passed to FDW. IOW, if a foreign table is an abstraction of an external\n> data source, ISTM that postgres_fdw should always issue TRUNCATE with\n> CASCADE. 
Why do we need to allow RESTRICT to be specified for a foreign table\n> even though it's an abstraction of an external data source?\n>\nPlease assume the internal heap data is managed by PostgreSQL core, and\nexternal data source is managed by postgres_fdw (or other FDW driver).\nTRUNCATE command requires these object managers to eliminate the data\non behalf of the foreign tables picked up.\n\nEven though the object manager tries to eliminate the managed data, it may be\nrestricted by some reason; FK restrictions in case of PostgreSQL internal data.\nIn this case, CASCADE/RESTRICT option suggests the object manager how\nto handle the target data.\n\nThe ONLY clause controls whoes data shall be eliminated.\nOn the other hand, CASCADE/RESTRICT and CONTINUE/RESTART controls\nhow data shall be eliminated. It is a primitive difference.\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n", "msg_date": "Wed, 14 Apr 2021 13:17:55 +0900", "msg_from": "Kohei KaiGai <kaigai@heterodb.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "At Wed, 14 Apr 2021 13:17:55 +0900, Kohei KaiGai <kaigai@heterodb.com> wrote in \n> 2021年4月14日(水) 0:00 Fujii Masao <masao.fujii@oss.nttdata.com>:\n> >\n> > On 2021/04/13 23:25, Kohei KaiGai wrote:\n> > > 2021年4月13日(火) 21:03 Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>:\n> > >> Yeah, ONLY clause is not pushed to the remote server in case of SELECT\n> > >> commands. This is also true for DELETE and UPDATE commands on foreign\n> > >> tables.\n> >\n> > This sounds reasonable reason why ONLY should be ignored in TRUNCATE on\n> > foreign tables, for now. If there is the existing rule about how to treat\n> > ONLY clause for foreign tables, basically TRUNCATE should follow that at this\n> > stage. 
Maybe we can change the rule, but it's an item for v15 or later?\n> >\n> >\n> > >> I'm not sure if it wasn't thought necessary or if there is an\n> > >> issue to push it or I may be missing something here.\n> >\n> > I could not find the past discussion about foreign tables and ONLY clause.\n> > I guess that ONLY is ignored in SELECT on foreign tables case because ONLY\n> > is interpreted outside the executor and it's not easy to change the executor\n> > so that ONLY is passed to FDW. Maybe..\n> >\n> >\n> > >> I think we can\n> > >> start a separate thread to see other hackers' opinions on this.\n> > >>\n> > >> I'm not sure whether all the clauses that are possible for\n> > >> SELECT/UPDATE/DELETE/INSERT with local tables are pushed to the remote\n> > >> server by postgres_fdw.\n> > >>\n> > >> Well, now foreign TRUNCATE pushes the ONLY clause to the remote server\n> > >> which is inconsistent when compared to SELECT/UPDATE/DELETE commands.\n> > >> If we were to keep it consistent across all foreign commands that\n> > >> ONLY clause is not pushed to remote server, then we can restrict for\n> > >> TRUNCATE too and even if \"TRUNCATE ONLY foreign_tbl\" is specified,\n> > >> just pass \"TRUNCATE foreign_tbl\" to remote server. Having said that, I\n> > >> don't see any real problem in pushing the ONLY clause, at least in\n> > >> case of TRUNCATE.\n> > >>\n> > > If ONLY-clause would be pushed down to the remote query of postgres_fdw,\n> > > what does the foreign-table represent in the local system?\n> > >\n> > > In my understanding, a local foreign table by postgres_fdw is a\n> > > representation of\n> > > entire tree of the remote parent table and its children.\n> >\n> > If so, I'm still wondering why CASCADE/RESTRICT (i.e., DropBehavior) needs to\n> > be passed to FDW. IOW, if a foreign table is an abstraction of an external\n> > data source, ISTM that postgres_fdw should always issue TRUNCATE with\n> > CASCADE. 
Why do we need to allow RESTRICT to be specified for a foreign table\n> > even though it's an abstraction of an external data source?\n> >\n> Please assume the internal heap data is managed by PostgreSQL core, and\n> external data source is managed by postgres_fdw (or other FDW driver).\n> TRUNCATE command requires these object managers to eliminate the data\n> on behalf of the foreign tables picked up.\n> \n> Even though the object manager tries to eliminate the managed data, it may be\n> restricted by some reason; FK restrictions in case of PostgreSQL internal data.\n> In this case, CASCADE/RESTRICT option suggests the object manager how\n> to handle the target data.\n> \n> The ONLY clause controls whose data shall be eliminated.\n> On the other hand, CASCADE/RESTRICT and CONTINUE/RESTART control\n> how data shall be eliminated. It is a primitive difference.\n\nI object to unconditionally push ONLY to remote. As Kaigai-san said\nthat it works in an apparently wrong way when a user wants to truncate only\nthe specified foreign table in an inheritance tree and there's no way to\navoid the behavior.\n\nI also don't think it is right to push down CASCADE/RESTRICT. The\noptions suggest to propagate truncation to *local* referrer tables\nfrom the *foreign* table, not to the remote referrer tables from the\noriginal table on remote. If a user wants to allow that behavior it\nshould be specified by foreign table options. (It is bothersome when\nsomeone wants to specify the behavior on-the-fly.)\n\nalter foreign table ft1 options (add truncate_cascade 'true');\n\nAlso, CONTINUE/RESTART IDENTITY should not work since foreign tables\ndon't have an identity-sequence. However, we might be able to\npush down these options since it affects only the target table.\n\nI would accept that behavior if TRUNCATE were \"TRUNCATE FOREIGN\nTABLE\", which explicitly targets a foreign table. But I'm not sure it
But I'm not sure it\nis possible to add such syntax reasonable way.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 14 Apr 2021 13:41:47 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "\n\nOn 2021/04/14 12:54, Bharath Rupireddy wrote:\n> IMHO, we can push all the TRUNCATE options (ONLY, RESTRICTED, CASCADE,\n> RESTART/CONTINUE IDENTITY), because it doesn't have any major\n> challenge(implementation wise) unlike pushing some clauses in\n> SELECT/UPDATE/DELETE and we already do this on the master. It doesn't\n> look good and may confuse users, if we push some options and restrict\n> others. We should have an explicit note in the documentation saying we\n> push all these options to the remote server. We can leave it to the\n> user to write TRUNCATE for foreign tables with the appropriate\n> options. If somebody complains about a problem that they will face\n> with this behavior, we can revisit.\n\nThat's one of the options. But I'm afraid it's hard to drop (revisit)\nthe feature once it has been released. 
So if there is no explicit\nuse case for that, basically I'd like to drop that before release\nlike we agree to drop unused TRUNCATE_REL_CONTEXT_CASCADING.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 15 Apr 2021 23:49:43 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "\n\nOn 2021/04/14 13:41, Kyotaro Horiguchi wrote:\n> At Wed, 14 Apr 2021 13:17:55 +0900, Kohei KaiGai <kaigai@heterodb.com> wrote in\n>> Please assume the internal heap data is managed by PostgreSQL core, and\n>> external data source is managed by postgres_fdw (or other FDW driver).\n>> TRUNCATE command requires these object managers to eliminate the data\n>> on behalf of the foreign tables picked up.\n>>\n>> Even though the object manager tries to eliminate the managed data, it may be\n>> restricted by some reason; FK restrictions in case of PostgreSQL internal data.\n>> In this case, CASCADE/RESTRICT option suggests the object manager how\n>> to handle the target data.\n>>\n>> The ONLY clause controls whose data shall be eliminated.\n>> On the other hand, CASCADE/RESTRICT and CONTINUE/RESTART control\n>> how data shall be eliminated. It is a primitive difference.\n\nI have a different view on this classification. IMO ONLY and RESTRICT/CASCADE\nshould be categorized into the same group. Because both options specify\nwhether to truncate dependent tables or not. If we treat a foreign table as\nan abstraction of an external data source, ISTM that we should not take care of\ntable dependency in the remote server. IOW, we should truncate the entire\nexternal data source, i.e., postgres_fdw should push neither ONLY nor\nRESTRICT down to the remote server.\n\n\n> I object to unconditionally push ONLY to remote. 
As Kaigai-san said\n> that it works in an apparently wrong way when a user wants to truncate only\n> the specified foreign table in an inheritance tree and there's no way to\n> avoid the behavior.\n> \n> I also don't think it is right to push down CASCADE/RESTRICT. The\n> options suggest to propagate truncation to *local* referrer tables\n> from the *foreign* table, not to the remote referrer tables from the\n> original table on remote.\n\nAgreed.\n\n\n> If a user wants to allow that behavior it\n> should be specified by foreign table options. (It is bothersome when\n> someone wants to specify the behavior on-the-fly.)\n> \n> alter foreign table ft1 options (add truncate_cascade 'true');\n\nMaybe. I think this is the item for v15 or later.\n\n\n> Also, CONTINUE/RESTART IDENTITY should not work since foreign tables\n> don't have an identity-sequence. However, we might be able to\n> push down these options since it affects only the target table.\n\n+1\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 16 Apr 2021 00:08:09 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Thu, Apr 15, 2021 at 8:19 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2021/04/14 12:54, Bharath Rupireddy wrote:\n> > IMHO, we can push all the TRUNCATE options (ONLY, RESTRICTED, CASCADE,\n> > RESTART/CONTINUE IDENTITY), because it doesn't have any major\n> > challenge(implementation wise) unlike pushing some clauses in\n> > SELECT/UPDATE/DELETE and we already do this on the master. It doesn't\n> > look good and may confuse users, if we push some options and restrict\n> > others. We should have an explicit note in the documentation saying we\n> > push all these options to the remote server. 
We can leave it to the\n> > user to write TRUNCATE for foreign tables with the appropriate\n> > options. If somebody complains about a problem that they will face\n> > with this behavior, we can revisit.\n>\n> That's one of the options. But I'm afraid it's hard to drop (revisit)\n> the feature once it has been released. So if there is no explicit\n> use case for that, basically I'd like to drop that before release\n> like we agree to drop unused TRUNCATE_REL_CONTEXT_CASCADING.\n\nThanks. Looks like the decision is going in the direction of\nrestricting those options, I will withdraw my point.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 16 Apr 2021 05:45:06 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On 2021/04/16 9:15, Bharath Rupireddy wrote:\n> On Thu, Apr 15, 2021 at 8:19 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> On 2021/04/14 12:54, Bharath Rupireddy wrote:\n>>> IMHO, we can push all the TRUNCATE options (ONLY, RESTRICTED, CASCADE,\n>>> RESTART/CONTINUE IDENTITY), because it doesn't have any major\n>>> challenge(implementation wise) unlike pushing some clauses in\n>>> SELECT/UPDATE/DELETE and we already do this on the master. It doesn't\n>>> look good and may confuse users, if we push some options and restrict\n>>> others. We should have an explicit note in the documentation saying we\n>>> push all these options to the remote server. We can leave it to the\n>>> user to write TRUNCATE for foreign tables with the appropriate\n>>> options. If somebody complains about a problem that they will face\n>>> with this behavior, we can revisit.\n>>\n>> That's one of the options. But I'm afraid it's hard to drop (revisit)\n>> the feature once it has been released. 
So if there is no explicit\n>> use case for that, basically I'd like to drop that before release\n>> like we agree to drop unused TRUNCATE_REL_CONTEXT_CASCADING.\n> \n> Thanks. Looks like the decision is going in the direction of\n> restricting those options, I will withdraw my point.\n\nWe are still discussing whether RESTRICT option should be pushed down to\na foreign data wrapper. But ISTM at least we could reach the consensus about\nthe drop of extra information for each foreign table. So what about applying\nthe attached patch and remove the extra information at first?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Fri, 16 Apr 2021 11:54:16 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "At Fri, 16 Apr 2021 11:54:16 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> On 2021/04/16 9:15, Bharath Rupireddy wrote:\n> > On Thu, Apr 15, 2021 at 8:19 PM Fujii Masao <masao.fujii@oss.nttdata.com>\n> > wrote:\n> >> On 2021/04/14 12:54, Bharath Rupireddy wrote:\n> >>> IMHO, we can push all the TRUNCATE options (ONLY, RESTRICTED, CASCADE,\n> >>> RESTART/CONTINUE IDENTITY), because it doesn't have any major\n> >>> challenge(implementation wise) unlike pushing some clauses in\n> >>> SELECT/UPDATE/DELETE and we already do this on the master. It doesn't\n> >>> look good and may confuse users, if we push some options and restrict\n> >>> others. We should have an explicit note in the documentation saying we\n> >>> push all these options to the remote server. We can leave it to the\n> >>> user to write TRUNCATE for foreign tables with the appropriate\n> >>> options. If somebody complains about a problem that they will face\n> >>> with this behavior, we can revisit.\n> >>\n> >> That's one of the options. 
But I'm afraid it's hard to drop (revisit)\n> >> the feature once it has been released. So if there is no explicit\n> >> use case for that, basically I'd like to drop that before release\n> >> like we agree to drop unused TRUNCATE_REL_CONTEXT_CASCADING.\n> > Thanks. Looks like the decision is going in the direction of\n> > restricting those options, I will withdraw my point.\n> \n> We are still discussing whether RESTRICT option should be pushed down to\n> a foreign data wrapper. But ISTM at least we could reach the consensus about\n> the drop of extra information for each foreign table. So what about applying\n> the attached patch and remove the extra information at first?\n\nI'm fine with that direction. Thanks for the patch.\n\nThe change is straight-forward and looks fine, except the following\npart.\n\n==== contrib/postgres_fdw/sql/postgres_fdw.sql: 2436 -- after patching\n2436> -- in case when remote table has inherited children\n2437> CREATE TABLE tru_rtable0_child () INHERITS (tru_rtable0);\n2438> INSERT INTO tru_rtable0 (SELECT x FROM generate_series(5,9) x);\n2439> INSERT INTO tru_rtable0_child (SELECT x FROM generate_series(10,14) x);\n2440> SELECT sum(id) FROM tru_ftable; -- 95\n2441>\n2442> TRUNCATE ONLY tru_ftable;\t\t-- truncate both parent and child\n2443> SELECT count(*) FROM tru_ftable; -- 0\n2444>\n2445> INSERT INTO tru_rtable0 (SELECT x FROM generate_series(21,25) x);\n2446> SELECT sum(id) FROM tru_ftable;\t\t-- 115\n2447> TRUNCATE tru_ftable;\t\t\t-- truncate both of parent and child\n2448> SELECT count(*) FROM tru_ftable; -- 0\n\nL2445-L2448 doesn't work as described since L2445 inserts tuples only\nto the parent.\n\nAnd there's a slight difference for no reason between the comment at\n2442 and 2447.\n\n(The attached is a fix on top of the proposed patch.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\ndiff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out\nindex 
1a3f5cb4ad..d32f291089 100644\n--- a/contrib/postgres_fdw/expected/postgres_fdw.out\n+++ b/contrib/postgres_fdw/expected/postgres_fdw.out\n@@ -8388,7 +8388,7 @@ SELECT sum(id) FROM tru_ftable; -- 95\n 95\n (1 row)\n \n-TRUNCATE ONLY tru_ftable;\t\t-- truncate both parent and child\n+TRUNCATE ONLY tru_ftable;\t\t-- truncate both of parent and child\n SELECT count(*) FROM tru_ftable; -- 0\n count \n -------\n@@ -8396,10 +8396,11 @@ SELECT count(*) FROM tru_ftable; -- 0\n (1 row)\n \n INSERT INTO tru_rtable0 (SELECT x FROM generate_series(21,25) x);\n-SELECT sum(id) FROM tru_ftable;\t\t-- 115\n+INSERT INTO tru_rtable0_child (SELECT x FROM generate_series(26,30) x);\n+SELECT sum(id) FROM tru_ftable;\t\t-- 255\n sum \n -----\n- 115\n+ 255\n (1 row)\n \n TRUNCATE tru_ftable;\t\t\t-- truncate both of parent and child\ndiff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql\nindex 97c156a472..65643e120d 100644\n--- a/contrib/postgres_fdw/sql/postgres_fdw.sql\n+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql\n@@ -2439,11 +2439,12 @@ INSERT INTO tru_rtable0 (SELECT x FROM generate_series(5,9) x);\n INSERT INTO tru_rtable0_child (SELECT x FROM generate_series(10,14) x);\n SELECT sum(id) FROM tru_ftable; -- 95\n \n-TRUNCATE ONLY tru_ftable;\t\t-- truncate both parent and child\n+TRUNCATE ONLY tru_ftable;\t\t-- truncate both of parent and child\n SELECT count(*) FROM tru_ftable; -- 0\n \n INSERT INTO tru_rtable0 (SELECT x FROM generate_series(21,25) x);\n-SELECT sum(id) FROM tru_ftable;\t\t-- 115\n+INSERT INTO tru_rtable0_child (SELECT x FROM generate_series(26,30) x);\n+SELECT sum(id) FROM tru_ftable;\t\t-- 255\n TRUNCATE tru_ftable;\t\t\t-- truncate both of parent and child\n SELECT count(*) FROM tru_ftable; -- 0", "msg_date": "Fri, 16 Apr 2021 14:20:38 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Fri, Apr 16, 2021 
at 8:24 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> We are still discussing whether RESTRICT option should be pushed down to\n> a foreign data wrapper. But ISTM at least we could reach the consensus about\n> the drop of extra information for each foreign table. So what about applying\n> the attached patch and remove the extra information at first?\n\nThanks for the patch, here are some comments:\n\n1) Maybe new empty lines would be better so that the code doesn't look\ncluttered:\n relids = lappend_oid(relids, myrelid); --> a new line after this.\n /* Log this relation only if needed for logical decoding */\n if (RelationIsLogicallyLogged(rel))\n\n relids = lappend_oid(relids, childrelid); --> a new line after this.\n /* Log this relation only if needed for logical decoding */\n\n relids = lappend_oid(relids, relid); --> a new line after this.\n /* Log this relation only if needed for logical decoding */\n if (RelationIsLogicallyLogged(rel))\n\n2) Instead of\n on foreign tables. <literal>rels</literal> is the list of\n <structname>Relation</structname> data structure that indicates\n a foreign table to truncate.\n\nI think it is better with:\n on foreign tables. <literal>rels</literal> is the list of\n <structname>Relation</structname> data structures, where each\n entry indicates a foreign table to truncate.\n\n3) How about adding an extra para(after below para in\npostgres_fdw.sgml) on WHY we don't push \"ONLY\" to foreign tables while\ntruncating? 
We could add to the same para for other options if at all\nwe don't choose to push them.\n <command>DELETE</command>, or <command>TRUNCATE</command>.\n (Of course, the remote user you have specified in your user mapping must\n have privileges to do these things.)\n\n4) Isn't it better to mention the \"ONLY\" option is not pushed to remote\n-- truncate with ONLY clause\nTRUNCATE ONLY tru_ftable_parent;\n\nTRUNCATE ONLY tru_ftable; -- truncate both parent and child\nSELECT count(*) FROM tru_ftable; -- 0\n\n5) I may be missing something here, why is even after ONLY is ignored\nin the below truncate command, the sum is 126? Shouldn't it truncate\nboth tru_ftable_parent and\n-- truncate with ONLY clause\nTRUNCATE ONLY tru_ftable_parent;\nSELECT sum(id) FROM tru_ftable_parent; -- 126\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 16 Apr 2021 11:43:41 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "\n\nOn 2021/04/16 14:20, Kyotaro Horiguchi wrote:\n> At Fri, 16 Apr 2021 11:54:16 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>> On 2021/04/16 9:15, Bharath Rupireddy wrote:\n>>> On Thu, Apr 15, 2021 at 8:19 PM Fujii Masao <masao.fujii@oss.nttdata.com>\n>>> wrote:\n>>>> On 2021/04/14 12:54, Bharath Rupireddy wrote:\n>>>>> IMHO, we can push all the TRUNCATE options (ONLY, RESTRICTED, CASCADE,\n>>>>> RESTART/CONTINUE IDENTITY), because it doesn't have any major\n>>>>> challenge(implementation wise) unlike pushing some clauses in\n>>>>> SELECT/UPDATE/DELETE and we already do this on the master. It doesn't\n>>>>> look good and may confuse users, if we push some options and restrict\n>>>>> others. We should have an explicit note in the documentation saying we\n>>>>> push all these options to the remote server. 
We can leave it to the\n>>>>> user to write TRUNCATE for foreign tables with the appropriate\n>>>>> options. If somebody complains about a problem that they will face\n>>>>> with this behavior, we can revisit.\n>>>>\n>>>> That's one of the options. But I'm afraid it's hard to drop (revisit)\n>>>> the feature once it has been released. So if there is no explicit\n>>>> use case for that, basically I'd like to drop that before release\n>>>> like we agree to drop unused TRUNCATE_REL_CONTEXT_CASCADING.\n>>> Thanks. Looks like the decision is going in the direction of\n>>> restricting those options, I will withdraw my point.\n>>\n>> We are still discussing whether RESTRICT option should be pushed down to\n>> a foreign data wrapper. But ISTM at least we could reach the consensus about\n>> the drop of extra information for each foreign table. So what about applying\n>> the attached patch and remove the extra information at first?\n> \n> I'm fine with that direction. Thanks for the patch.\n> \n> The change is straight-forward and looks fine, except the following\n> part.\n> \n> ==== contrib/postgres_fdw/sql/postgres_fdw.sql: 2436 -- after patching\n> 2436> -- in case when remote table has inherited children\n> 2437> CREATE TABLE tru_rtable0_child () INHERITS (tru_rtable0);\n> 2438> INSERT INTO tru_rtable0 (SELECT x FROM generate_series(5,9) x);\n> 2439> INSERT INTO tru_rtable0_child (SELECT x FROM generate_series(10,14) x);\n> 2440> SELECT sum(id) FROM tru_ftable; -- 95\n> 2441>\n> 2442> TRUNCATE ONLY tru_ftable;\t\t-- truncate both parent and child\n> 2443> SELECT count(*) FROM tru_ftable; -- 0\n> 2444>\n> 2445> INSERT INTO tru_rtable0 (SELECT x FROM generate_series(21,25) x);\n> 2446> SELECT sum(id) FROM tru_ftable;\t\t-- 115\n> 2447> TRUNCATE tru_ftable;\t\t\t-- truncate both of parent and child\n> 2448> SELECT count(*) FROM tru_ftable; -- 0\n> \n> L2445-L2448 doesn't work as described since L2445 inserts tuples only\n> to the parent.\n> \n> And there's a slight 
difference for no reason between the comment at\n> 2442 and 2447.\n\nAgreed. Thanks!\n\n\n> (The attached is a fix on top of the proposed patch.)\n\nI will include this patch into the main patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 21 Apr 2021 23:41:38 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On 2021/04/16 15:13, Bharath Rupireddy wrote:\n> On Fri, Apr 16, 2021 at 8:24 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> We are still discussing whether RESTRICT option should be pushed down to\n>> a foreign data wrapper. But ISTM at least we could reach the consensus about\n>> the drop of extra information for each foreign table. So what about applying\n>> the attached patch and remove the extra information at first?\n> \n> Thanks for the patch, here are some comments:\n\nThanks for the review!\n\n> \n> 1) Maybe new empty lines would be better so that the code doesn't look\n> cluttered:\n> relids = lappend_oid(relids, myrelid); --> a new line after this.\n> /* Log this relation only if needed for logical decoding */\n> if (RelationIsLogicallyLogged(rel))\n> \n> relids = lappend_oid(relids, childrelid); --> a new line after this.\n> /* Log this relation only if needed for logical decoding */\n> \n> relids = lappend_oid(relids, relid); --> a new line after this.\n> /* Log this relation only if needed for logical decoding */\n> if (RelationIsLogicallyLogged(rel))\n\nApplied. Attached is the updated version of the patch\n(truncate_foreign_table_dont_pass_only_clause_v2.patch).\nThis patch includes the patch that Horiguchi-san posted upthread.\nI'm thinking to commit this patch at first.\n\n\n\n> 2) Instead of\n> on foreign tables. 
<literal>rels</literal> is the list of\n> <structname>Relation</structname> data structure that indicates\n> a foreign table to truncate.\n> \n> I think it is better with:\n> on foreign tables. <literal>rels</literal> is the list of\n> <structname>Relation</structname> data structures, where each\n> entry indicates a foreign table to truncate.\n\nJustin posted the patch that improves the documents including\nthis description. I think that we should revisit that patch.\nAttached is the updated version of that patch.\n(truncate_foreign_table_docs_v1.patch)\n\n\n> 3) How about adding an extra para(after below para in\n> postgres_fdw.sgml) on WHY we don't push "ONLY" to foreign tables while\n> truncating? We could add to the same para for other options if at all\n> we don't choose to push them.\n> <command>DELETE</command>, or <command>TRUNCATE</command>.\n> (Of course, the remote user you have specified in your user mapping must\n> have privileges to do these things.)\n\nI agree to document the behavior that ONLY option is always ignored\nfor foreign tables. But I'm not sure if we can document WHY.\nBecause I could not find the past discussion about why ONLY option is\nignored on SELECT, etc... Maybe it's enough to document the behavior?\n\n\n> 4) Isn't it better to mention the "ONLY" option is not pushed to remote\n> -- truncate with ONLY clause\n> TRUNCATE ONLY tru_ftable_parent;\n> \n> TRUNCATE ONLY tru_ftable; -- truncate both parent and child\n> SELECT count(*) FROM tru_ftable; -- 0\n> \n> 5) I may be missing something here, why is even after ONLY is ignored\n> in the below truncate command, the sum is 126? Shouldn't it truncate\n> both tru_ftable_parent and\n> -- truncate with ONLY clause\n> TRUNCATE ONLY tru_ftable_parent;\n> SELECT sum(id) FROM tru_ftable_parent; -- 126\n\nBecause TRUNCATE ONLY command doesn't truncate tru_ftable_child table\nthat inherits tru_ftable_parent. 
No?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 22 Apr 2021 00:01:35 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Wed, Apr 21, 2021 at 8:31 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> Applied. Attached is the updated version of the patch\n> (truncate_foreign_table_dont_pass_only_clause_v2.patch).\n> This patch includes the patch that Horiguchi-san posted upthead.\n> I'm thinking to commit this patch at first.\n\n+1.\n\n> > 2) Instead of\n> > on foreign tables. <literal>rels</literal> is the list of\n> > <structname>Relation</structname> data structure that indicates\n> > a foreign table to truncate.\n> >\n> > I think it is better with:\n> > on foreign tables. <literal>rels</literal> is the list of\n> > <structname>Relation</structname> data structures, where each\n> > entry indicates a foreign table to truncate.\n>\n> Justin posted the patch that improves the documents including\n> this description. I think that we should revisit that patch.\n> Attached is the updated version of that patch.\n> (truncate_foreign_table_docs_v1.patch)\n\nOne comment on truncate_foreign_table_docs_v1.patch:\n1) I think it is \"to be truncated\"\n+ <literal>rels</literal> is a list of <structname>Relation</structname>\n+ data structures for each foreign table to truncated.\nHow about a slightly changed phrasing like below?\n+ <literal>rels</literal> is a list of <structname>Relation</structname>\n+ data structures of foreign tables to truncate.\n\nOther than above, the patch LGTM.\n\n> > 3) How about adding an extra para(after below para in\n> > postgres_fdw.sgml) on WHY we don't push \"ONLY\" to foreign tables while\n> > truncating? 
We could add to the same para for other options if at all\n> > we don't choose to push them.\n> > <command>DELETE</command>, or <command>TRUNCATE</command>.\n> > (Of course, the remote user you have specified in your user mapping must\n> > have privileges to do these things.)\n>\n> I agree to document the behavior that ONLY option is always ignored\n> for foreign tables. But I'm not sure if we can document WHY.\n> Because I could not find the past discussion about why ONLY option is\n> ignored on SELECT, etc... Maybe it's enough to document the behavior?\n\n+1 to specify in the documentation about ONLY option is always\nignored. But can we specify the WHY part within deparseTruncateSql, it\nwill be there for developer reference? I feel it's better if this\nchange goes with truncate_foreign_table_dont_pass_only_clause_v2.patch\n\n> > 4) Isn't it better to mention the \"ONLY\" option is not pushed to remote\n> > -- truncate with ONLY clause\n> > TRUNCATE ONLY tru_ftable_parent;\n> >\n> > TRUNCATE ONLY tru_ftable; -- truncate both parent and child\n> > SELECT count(*) FROM tru_ftable; -- 0\n> >\n> > 5) I may be missing something here, why is even after ONLY is ignored\n> > in the below truncate command, the sum is 126? Shouldn't it truncate\n> > both tru_ftable_parent and\n> > -- truncate with ONLY clause\n> > TRUNCATE ONLY tru_ftable_parent;\n> > SELECT sum(id) FROM tru_ftable_parent; -- 126\n>\n> Because TRUNCATE ONLY command doesn't truncate tru_ftable_child talbe\n> that inehrits tru_ftable_parent. No?\n\nI get it. 
tru_ftable_child is a local child so ONLY doesn't truncate it.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 22 Apr 2021 06:09:21 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On 2021/04/22 9:39, Bharath Rupireddy wrote:\n> One comment on truncate_foreign_table_docs_v1.patch:\n> 1) I think it is \"to be truncated\"\n> + <literal>rels</literal> is a list of <structname>Relation</structname>\n> + data structures for each foreign table to truncated.\n\nFixed. Thanks!\n\n> How about a slightly changed phrasing like below?\n> + <literal>rels</literal> is a list of <structname>Relation</structname>\n> + data structures of foreign tables to truncate.\nEither works at least for me. If you think that this phrasing is\nmore precise or better, I'm ok with that and will update the patch again.\n\n\n> Other than above, the patch LGTM.\n> \n>>> 3) How about adding an extra para(after below para in\n>>> postgres_fdw.sgml) on WHY we don't push \"ONLY\" to foreign tables while\n>>> truncating? We could add to the same para for other options if at all\n>>> we don't choose to push them.\n>>> <command>DELETE</command>, or <command>TRUNCATE</command>.\n>>> (Of course, the remote user you have specified in your user mapping must\n>>> have privileges to do these things.)\n>>\n>> I agree to document the behavior that ONLY option is always ignored\n>> for foreign tables. But I'm not sure if we can document WHY.\n>> Because I could not find the past discussion about why ONLY option is\n>> ignored on SELECT, etc... Maybe it's enough to document the behavior?\n> \n> +1 to specify in the documentation about ONLY option is always\n> ignored.\n\nAdded.\n\n\n> But can we specify the WHY part within deparseTruncateSql, it\n> will be there for developer reference? 
I feel it's better if this\n> change goes with truncate_foreign_table_dont_pass_only_clause_v2.patch\n\nI added this information into fdwhandler.sgml because the developers\nusually read fdwhandler.sgml.\n\n\n>>> 4) Isn't it better to mention the \"ONLY\" option is not pushed to remote\n>>> -- truncate with ONLY clause\n>>> TRUNCATE ONLY tru_ftable_parent;\n>>>\n>>> TRUNCATE ONLY tru_ftable; -- truncate both parent and child\n>>> SELECT count(*) FROM tru_ftable; -- 0\n\nI added the comment.\n\n\nCould you review the attached patches?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 22 Apr 2021 15:36:25 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Thu, Apr 22, 2021 at 03:36:25PM +0900, Fujii Masao wrote:\n> diff --git a/doc/src/sgml/fdwhandler.sgml b/doc/src/sgml/fdwhandler.sgml\n> index 553524553b..69aa66e73e 100644\n> --- a/doc/src/sgml/fdwhandler.sgml\n> +++ b/doc/src/sgml/fdwhandler.sgml\n> @@ -1076,27 +1076,25 @@ ExecForeignTruncate(List *rels,\n> bool restart_seqs);\n> <para>\n> - <literal>behavior</literal> defines how foreign tables should\n> - be truncated, using as possible values <literal>DROP_RESTRICT</literal>,\n> - which means that <literal>RESTRICT</literal> option is specified,\n> - and <literal>DROP_CASCADE</literal>, which means that\n> - <literal>CASCADE</literal> option is specified, in\n> - <command>TRUNCATE</command> command.\n> + <literal>behavior</literal> is either <literal>DROP_RESTRICT</literal>\n> + or <literal>DROP_CASCADE</literal>, which indicates that the\n> + <literal>RESTRICT</literal> or <literal>CASCADE</literal> option was\n> + requested in the original <command>TRUNCATE</command> command,\n> + respectively.\n\nNow that I reread this, I would change \"which indicates\" to \"indicating\".\n\n> - 
<literal>restart_seqs</literal> is set to <literal>true</literal>\n> - if <literal>RESTART IDENTITY</literal> option is specified in\n> - <command>TRUNCATE</command> command. It is <literal>false</literal>\n> - if <literal>CONTINUE IDENTITY</literal> option is specified.\n> + If <literal>restart_seqs</literal> is <literal>true</literal>,\n> + the original <command>TRUNCATE</command> command requested the\n> + <literal>RESTART IDENTITY</literal> option, otherwise\n> + <literal>CONTINUE IDENTITY</literal> option.\n\nshould it say \"specified\" instead of requested ?\nOr should it say \"requested the RESTART IDENTITY behavior\" ?\n\nAlso, I think it should say \"..otherwise, the CONTINUE IDENTITY behavior was\nrequested\".\n\n> +++ b/doc/src/sgml/ref/truncate.sgml\n> @@ -173,7 +173,7 @@ TRUNCATE [ TABLE ] [ ONLY ] <replaceable class=\"parameter\">name</replaceable> [\n> \n> <para>\n> <command>TRUNCATE</command> can be used for foreign tables if\n> - the foreign data wrapper supports, for instance,\n> + supported by the foreign data wrapper, for instance,\n> see <xref linkend=\"postgres-fdw\"/>.\n\nwhat does \"for instance\" mean here? I think it should be removed.\n\n> +++ b/doc/src/sgml/fdwhandler.sgml\n> @@ -1111,6 +1099,15 @@ ExecForeignTruncate(List *rels, List *rels_extra,\n> if <literal>CONTINUE IDENTITY</literal> option is specified.\n> </para>\n> \n> + <para>\n> + Note that information about <literal>ONLY</literal> options specified\n> + in the original <command>TRUNCATE</command> command is not passed to\n> + <function>ExecForeignTruncate</function>. 
This is the same behavior as\n> + for the callback functions for <command>SELECT</command>,\n> + <command>UPDATE</command> and <command>DELETE</command> on\n\nThere's an extra space before DELETE\n\n> diff --git a/doc/src/sgml/postgres-fdw.sgml b/doc/src/sgml/postgres-fdw.sgml\n> index 5320accf6f..d03731b7d4 100644\n> --- a/doc/src/sgml/postgres-fdw.sgml\n> +++ b/doc/src/sgml/postgres-fdw.sgml\n> @@ -69,6 +69,13 @@\n> have privileges to do these things.)\n> </para>\n> \n> + <para>\n> + Note that <literal>ONLY</literal> option specified in\n\nadd \"the\" to say: \"the ONLY\"\n\n> + <command>SELECT</command>, <command>UPDATE</command>,\n> + <command>DELETE</command> or <command>TRUNCATE</command>\n> + has no effect when accessing or modifyung the remote table.\n\nmodifying\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 22 Apr 2021 03:56:14 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Thu, Apr 22, 2021 at 12:06 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> On 2021/04/22 9:39, Bharath Rupireddy wrote:\n> > One comment on truncate_foreign_table_docs_v1.patch:\n> > 1) I think it is \"to be truncated\"\n> > + <literal>rels</literal> is a list of <structname>Relation</structname>\n> > + data structures for each foreign table to truncated.\n>\n> Fixed. Thanks!\n>\n> > How about a slightly changed phrasing like below?\n> > + <literal>rels</literal> is a list of <structname>Relation</structname>\n> > + data structures of foreign tables to truncate.\n> Either works at least for me. If you think that this phrasing is\n> more precise or better, I'm ok with that and will update the patch again.\n\nIMO, \"rels is a list of Relation data structures of foreign tables to\ntruncate.\" looks better.\n\n> >>> 3) How about adding an extra para(after below para in\n> >>> postgres_fdw.sgml) on WHY we don't push \"ONLY\" to foreign tables while\n> >>> truncating? 
We could add to the same para for other options if at all\n> >>> we don't choose to push them.\n> >>> <command>DELETE</command>, or <command>TRUNCATE</command>.\n> >>> (Of course, the remote user you have specified in your user mapping must\n> >>> have privileges to do these things.)\n> >>\n> >> I agree to document the behavior that ONLY option is always ignored\n> >> for foreign tables. But I'm not sure if we can document WHY.\n> >> Because I could not find the past discussion about why ONLY option is\n> >> ignored on SELECT, etc... Maybe it's enough to document the behavior?\n> >\n> > +1 to specify in the documentation about ONLY option is always\n> > ignored.\n>\n> Added.\n>\n> > But can we specify the WHY part within deparseTruncateSql, it\n> > will be there for developer reference? I feel it's better if this\n> > change goes with truncate_foreign_table_dont_pass_only_clause_v2.patch\n>\n> I added this information into fdwhandler.sgml because the developers\n> usually read fdwhandler.sgml.\n\nThanks!\n\n+ <para>\n+ Note that information about <literal>ONLY</literal> options specified\n+ in the original <command>TRUNCATE</command> command is not passed to\n\nI think it is not \"information about\", no? We just don't pass ONLY\noption instead we skip it. IMO, we can say \"Note that the ONLY option\nspecified with a foreign table in the original TRUNCATE command is\nskipped and not passed to ExecForeignTruncate.\"\n\n+ <function>ExecForeignTruncate</function>. 
This is the same behavior as\n+ for the callback functions for <command>SELECT</command>,\n+ <command>UPDATE</command> and <command>DELETE</command> on\n+ a foreign table.\n\nHow about \"This behaviour is similar to the callback functions of\nSELECT, UPDATE, DELETE on a foreign table\"?\n\n> >>> 4) Isn't it better to mention the \"ONLY\" option is not pushed to remote\n> >>> -- truncate with ONLY clause\n> >>> TRUNCATE ONLY tru_ftable_parent;\n> >>>\n> >>> TRUNCATE ONLY tru_ftable; -- truncate both parent and child\n> >>> SELECT count(*) FROM tru_ftable; -- 0\n>\n> I added the comment.\n\nLGTM.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 22 Apr 2021 16:57:32 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Thu, Apr 22, 2021 at 2:26 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Apr 22, 2021 at 03:36:25PM +0900, Fujii Masao wrote:\n> > diff --git a/doc/src/sgml/fdwhandler.sgml b/doc/src/sgml/fdwhandler.sgml\n> > index 553524553b..69aa66e73e 100644\n> > --- a/doc/src/sgml/fdwhandler.sgml\n> > +++ b/doc/src/sgml/fdwhandler.sgml\n> > @@ -1076,27 +1076,25 @@ ExecForeignTruncate(List *rels,\n> > bool restart_seqs);\n> > <para>\n> > - <literal>behavior</literal> defines how foreign tables should\n> > - be truncated, using as possible values <literal>DROP_RESTRICT</literal>,\n> > - which means that <literal>RESTRICT</literal> option is specified,\n> > - and <literal>DROP_CASCADE</literal>, which means that\n> > - <literal>CASCADE</literal> option is specified, in\n> > - <command>TRUNCATE</command> command.\n> > + <literal>behavior</literal> is either <literal>DROP_RESTRICT</literal>\n> > + or <literal>DROP_CASCADE</literal>, which indicates that the\n> > + <literal>RESTRICT</literal> or <literal>CASCADE</literal> option was\n> > + requested in the original 
<command>TRUNCATE</command> command,\n> > + respectively.\n>\n> Now that I reread this, I would change \"which indicates\" to \"indicating\".\n\n+1.\n\n> > - <literal>restart_seqs</literal> is set to <literal>true</literal>\n> > - if <literal>RESTART IDENTITY</literal> option is specified in\n> > - <command>TRUNCATE</command> command. It is <literal>false</literal>\n> > - if <literal>CONTINUE IDENTITY</literal> option is specified.\n> > + If <literal>restart_seqs</literal> is <literal>true</literal>,\n> > + the original <command>TRUNCATE</command> command requested the\n> > + <literal>RESTART IDENTITY</literal> option, otherwise\n> > + <literal>CONTINUE IDENTITY</literal> option.\n>\n> should it say \"specified\" instead of requested ?\n> Or should it say \"requested the RESTART IDENTITY behavior\" ?\n>\n> Also, I think it should say \"..otherwise, the CONTINUE IDENTITY behavior was\n> requested\".\n\nThe original TRUNCATE document uses this - \"When RESTART IDENTITY is specified\"\n\nIMO the following looks better: \"If restart_seqs is true, RESTART\nIDENTITY was specified in the original TRUNCATE command, otherwise\nCONTINUE IDENTITY was specified.\"\n\n> > +++ b/doc/src/sgml/ref/truncate.sgml\n> > @@ -173,7 +173,7 @@ TRUNCATE [ TABLE ] [ ONLY ] <replaceable class=\"parameter\">name</replaceable> [\n> >\n> > <para>\n> > <command>TRUNCATE</command> can be used for foreign tables if\n> > - the foreign data wrapper supports, for instance,\n> > + supported by the foreign data wrapper, for instance,\n> > see <xref linkend=\"postgres-fdw\"/>.\n>\n> what does \"for instance\" mean here? 
I think it should be removed.\n\n+1.\n\n> > +++ b/doc/src/sgml/fdwhandler.sgml\n> > @@ -1111,6 +1099,15 @@ ExecForeignTruncate(List *rels, List *rels_extra,\n> > if <literal>CONTINUE IDENTITY</literal> option is specified.\n> > </para>\n> >\n> > + <para>\n> > + Note that information about <literal>ONLY</literal> options specified\n> > + in the original <command>TRUNCATE</command> command is not passed to\n> > + <function>ExecForeignTruncate</function>. This is the same behavior as\n> > + for the callback functions for <command>SELECT</command>,\n> > + <command>UPDATE</command> and <command>DELETE</command> on\n>\n> There's an extra space before DELETE\n\nGood catch! Extra space after \"and\" and before \"<command>\".\n\n> > diff --git a/doc/src/sgml/postgres-fdw.sgml b/doc/src/sgml/postgres-fdw.sgml\n> > index 5320accf6f..d03731b7d4 100644\n> > --- a/doc/src/sgml/postgres-fdw.sgml\n> > +++ b/doc/src/sgml/postgres-fdw.sgml\n> > @@ -69,6 +69,13 @@\n> > have privileges to do these things.)\n> > </para>\n> >\n> > + <para>\n> > + Note that <literal>ONLY</literal> option specified in\n>\n> add \"the\" to say: \"the ONLY\"\n\n+1.\n\n> > + <command>SELECT</command>, <command>UPDATE</command>,\n> > + <command>DELETE</command> or <command>TRUNCATE</command>\n> > + has no effect when accessing or modifyung the remote table.\n>\n> modifying\n\nGood catch!\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 22 Apr 2021 17:09:02 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Thu, Apr 22, 2021 at 05:09:02PM +0530, Bharath Rupireddy wrote:\n> > should it say \"specified\" instead of requested ?\n> > Or should it say \"requested the RESTART IDENTITY behavior\" ?\n> >\n> > Also, I think it should say \"..otherwise, the CONTINUE IDENTITY behavior was\n> > requested\".\n> \n> The original TRUNCATE 
document uses this - \"When RESTART IDENTITY is specified\"\n> \n> IMO the following looks better: \"If restart_seqs is true, RESTART\n> IDENTITY was specified in the original TRUNCATE command, otherwise\n> CONTINUE IDENTITY was specified.\"\n\nThis suggests that one of the two options was \"specified\", but the user maybe\ndidn't specify either, which is why we used the \"behavior\" language - if\nneither is \"specified\" then the default behavior is what was \"requested\".\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 22 Apr 2021 09:06:51 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Thu, Apr 22, 2021 at 4:39 AM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Thu, Apr 22, 2021 at 2:26 PM Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n> >\n> > On Thu, Apr 22, 2021 at 03:36:25PM +0900, Fujii Masao wrote:\n> > > diff --git a/doc/src/sgml/fdwhandler.sgml\n> b/doc/src/sgml/fdwhandler.sgml\n> > > index 553524553b..69aa66e73e 100644\n> > > --- a/doc/src/sgml/fdwhandler.sgml\n> > > +++ b/doc/src/sgml/fdwhandler.sgml\n> > > @@ -1076,27 +1076,25 @@ ExecForeignTruncate(List *rels,\n> > > bool restart_seqs);\n> > > <para>\n> > > - <literal>behavior</literal> defines how foreign tables should\n> > > - be truncated, using as possible values\n> <literal>DROP_RESTRICT</literal>,\n> > > - which means that <literal>RESTRICT</literal> option is specified,\n> > > - and <literal>DROP_CASCADE</literal>, which means that\n> > > - <literal>CASCADE</literal> option is specified, in\n> > > - <command>TRUNCATE</command> command.\n> > > + <literal>behavior</literal> is either\n> <literal>DROP_RESTRICT</literal>\n> > > + or <literal>DROP_CASCADE</literal>, which indicates that the\n> > > + <literal>RESTRICT</literal> or <literal>CASCADE</literal> option\n> was\n> > > + requested in the original <command>TRUNCATE</command> command,\n> > > + respectively.\n> >\n> 
> Now that I reread this, I would change \"which indicates\" to \"indicating\".\n>\n> +1.\n>\n> > > - <literal>restart_seqs</literal> is set to <literal>true</literal>\n> > > - if <literal>RESTART IDENTITY</literal> option is specified in\n> > > - <command>TRUNCATE</command> command. It is\n> <literal>false</literal>\n> > > - if <literal>CONTINUE IDENTITY</literal> option is specified.\n> > > + If <literal>restart_seqs</literal> is <literal>true</literal>,\n> > > + the original <command>TRUNCATE</command> command requested the\n> > > + <literal>RESTART IDENTITY</literal> option, otherwise\n> > > + <literal>CONTINUE IDENTITY</literal> option.\n> >\n> > should it say \"specified\" instead of requested ?\n> > Or should it say \"requested the RESTART IDENTITY behavior\" ?\n> >\n> > Also, I think it should say \"..otherwise, the CONTINUE IDENTITY behavior\n> was\n> > requested\".\n>\n> The original TRUNCATE document uses this - \"When RESTART IDENTITY is\n> specified\"\n>\n> IMO the following looks better: \"If restart_seqs is true, RESTART\n> IDENTITY was specified in the original TRUNCATE command, otherwise\n> CONTINUE IDENTITY was specified.\"\n>\n> > > +++ b/doc/src/sgml/ref/truncate.sgml\n> > > @@ -173,7 +173,7 @@ TRUNCATE [ TABLE ] [ ONLY ] <replaceable\n> class=\"parameter\">name</replaceable> [\n> > >\n> > > <para>\n> > > <command>TRUNCATE</command> can be used for foreign tables if\n> > > - the foreign data wrapper supports, for instance,\n> > > + supported by the foreign data wrapper, for instance,\n> > > see <xref linkend=\"postgres-fdw\"/>.\n> >\n> > what does \"for instance\" mean here? 
I think it should be removed.\n>\n> +1.\n>\n> > > +++ b/doc/src/sgml/fdwhandler.sgml\n> > > @@ -1111,6 +1099,15 @@ ExecForeignTruncate(List *rels, List\n> *rels_extra,\n> > > if <literal>CONTINUE IDENTITY</literal> option is specified.\n> > > </para>\n> > >\n> > > + <para>\n> > > + Note that information about <literal>ONLY</literal> options\n> specified\n> > > + in the original <command>TRUNCATE</command> command is not\n> passed to\n> > > + <function>ExecForeignTruncate</function>. This is the same\n> behavior as\n> > > + for the callback functions for <command>SELECT</command>,\n> > > + <command>UPDATE</command> and <command>DELETE</command> on\n> >\n> > There's an extra space before DELETE\n>\n> Good catch! Extra space after \"and\" and before \"<command>\".\n>\n> > > diff --git a/doc/src/sgml/postgres-fdw.sgml\n> b/doc/src/sgml/postgres-fdw.sgml\n> > > index 5320accf6f..d03731b7d4 100644\n> > > --- a/doc/src/sgml/postgres-fdw.sgml\n> > > +++ b/doc/src/sgml/postgres-fdw.sgml\n> > > @@ -69,6 +69,13 @@\n> > > have privileges to do these things.)\n> > > </para>\n> > >\n> > > + <para>\n> > > + Note that <literal>ONLY</literal> option specified in\n> >\n> > add \"the\" to say: \"the ONLY\"\n>\n> +1.\n>\n\nSince 'the only option' is legitimate English phrase, I think the following\nwould be clearer:\n\nNote that the option <literal>ONLY</literal> ...\n\nCheers\n\n\n>\n> > > + <command>SELECT</command>, <command>UPDATE</command>,\n> > > + <command>DELETE</command> or <command>TRUNCATE</command>\n> > > + has no effect when accessing or modifyung the remote table.\n> >\n> > modifying\n>\n> Good catch!\n>\n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\nOn Thu, Apr 22, 2021 at 4:39 AM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:On Thu, Apr 22, 2021 at 2:26 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Apr 22, 2021 at 03:36:25PM +0900, Fujii Masao wrote:\n> > diff --git 
a/doc/src/sgml/fdwhandler.sgml b/doc/src/sgml/fdwhandler.sgml\n> > index 553524553b..69aa66e73e 100644\n> > --- a/doc/src/sgml/fdwhandler.sgml\n> > +++ b/doc/src/sgml/fdwhandler.sgml\n> > @@ -1076,27 +1076,25 @@ ExecForeignTruncate(List *rels,\n> >                      bool restart_seqs);\n> >      <para>\n> > -     <literal>behavior</literal> defines how foreign tables should\n> > -     be truncated, using as possible values <literal>DROP_RESTRICT</literal>,\n> > -     which means that <literal>RESTRICT</literal> option is specified,\n> > -     and <literal>DROP_CASCADE</literal>, which means that\n> > -     <literal>CASCADE</literal> option is specified, in\n> > -     <command>TRUNCATE</command> command.\n> > +     <literal>behavior</literal> is either <literal>DROP_RESTRICT</literal>\n> > +     or <literal>DROP_CASCADE</literal>, which indicates that the\n> > +     <literal>RESTRICT</literal> or <literal>CASCADE</literal> option was\n> > +     requested in the original <command>TRUNCATE</command> command,\n> > +     respectively.\n>\n> Now that I reread this, I would change \"which indicates\" to \"indicating\".\n\n+1.\n\n> > -     <literal>restart_seqs</literal> is set to <literal>true</literal>\n> > -     if <literal>RESTART IDENTITY</literal> option is specified in\n> > -     <command>TRUNCATE</command> command.  
It is <literal>false</literal>\n> > -     if <literal>CONTINUE IDENTITY</literal> option is specified.\n> > +     If <literal>restart_seqs</literal> is <literal>true</literal>,\n> > +     the original <command>TRUNCATE</command> command requested the\n> > +     <literal>RESTART IDENTITY</literal> option, otherwise\n> > +     <literal>CONTINUE IDENTITY</literal> option.\n>\n> should it say \"specified\" instead of requested ?\n> Or should it say \"requested the RESTART IDENTITY behavior\" ?\n>\n> Also, I think it should say \"..otherwise, the CONTINUE IDENTITY behavior was\n> requested\".\n\nThe original TRUNCATE document uses this - \"When RESTART IDENTITY is specified\"\n\nIMO the following looks better: \"If restart_seqs is true, RESTART\nIDENTITY was specified in the original TRUNCATE command, otherwise\nCONTINUE IDENTITY was specified.\"\n\n> > +++ b/doc/src/sgml/ref/truncate.sgml\n> > @@ -173,7 +173,7 @@ TRUNCATE [ TABLE ] [ ONLY ] <replaceable class=\"parameter\">name</replaceable> [\n> >\n> >    <para>\n> >     <command>TRUNCATE</command> can be used for foreign tables if\n> > -   the foreign data wrapper supports, for instance,\n> > +   supported by the foreign data wrapper, for instance,\n> >     see <xref linkend=\"postgres-fdw\"/>.\n>\n> what does \"for instance\" mean here?  I think it should be removed.\n\n+1.\n\n> > +++ b/doc/src/sgml/fdwhandler.sgml\n> > @@ -1111,6 +1099,15 @@ ExecForeignTruncate(List *rels, List *rels_extra,\n> >       if <literal>CONTINUE IDENTITY</literal> option is specified.\n> >      </para>\n> >\n> > +    <para>\n> > +     Note that information about <literal>ONLY</literal> options specified\n> > +     in the original <command>TRUNCATE</command> command is not passed to\n> > +     <function>ExecForeignTruncate</function>.  
This is the same behavior as\n> > +     for the callback functions for <command>SELECT</command>,\n> > +     <command>UPDATE</command> and  <command>DELETE</command> on\n>\n> There's an extra space before DELETE\n\nGood catch! Extra space after \"and\" and before \"<command>\".\n\n> > diff --git a/doc/src/sgml/postgres-fdw.sgml b/doc/src/sgml/postgres-fdw.sgml\n> > index 5320accf6f..d03731b7d4 100644\n> > --- a/doc/src/sgml/postgres-fdw.sgml\n> > +++ b/doc/src/sgml/postgres-fdw.sgml\n> > @@ -69,6 +69,13 @@\n> >    have privileges to do these things.)\n> >   </para>\n> >\n> > + <para>\n> > +  Note that <literal>ONLY</literal> option specified in\n>\n> add \"the\" to say: \"the ONLY\"\n\n+1.\n\nSince 'the only option' is legitimate English phrase, I think the following\nwould be clearer:\n\nNote that the option <literal>ONLY</literal> ...\n\nCheers\n\n> > +  <command>SELECT</command>, <command>UPDATE</command>,\n> > +  <command>DELETE</command> or <command>TRUNCATE</command>\n> > +  has no effect when accessing or modifyung the remote table.\n>\n> modifying\n\nGood catch!\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 22 Apr 2021 07:41:06 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Thu, Apr 22, 2021 at 07:41:06AM -0700, Zhihong Yu wrote:\n> > > > + Note that <literal>ONLY</literal> option specified in\n> > >\n> > > add \"the\" to say: \"the ONLY\"\n> >\n> > +1.\n> \n> Since 'the only option' is legitimate English phrase, I think the following\n> would be clearer:\n> \n> Note that the option <literal>ONLY</literal> ...\n\nI think the ONLY option is better, more clear, and more consistent with the\nrest of the documentation.\n\nThere are only ~5 places where we say \"the option >OPTION<\":\n| git grep 'the option <' doc/src/sgml/\n\nAnd at least 150 places where we say \"The >OPTION< option\" (I'm sure there are\nsome more 
which are split across lines).\n| git grep -E 'the <([^>]*)>[^<]*</\\1> option' doc/src/sgml/ |wc -l\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 22 Apr 2021 12:34:37 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "\n\nOn 2021/04/22 17:56, Justin Pryzby wrote:\n> On Thu, Apr 22, 2021 at 03:36:25PM +0900, Fujii Masao wrote:\n>> diff --git a/doc/src/sgml/fdwhandler.sgml b/doc/src/sgml/fdwhandler.sgml\n>> index 553524553b..69aa66e73e 100644\n>> --- a/doc/src/sgml/fdwhandler.sgml\n>> +++ b/doc/src/sgml/fdwhandler.sgml\n>> @@ -1076,27 +1076,25 @@ ExecForeignTruncate(List *rels,\n>> bool restart_seqs);\n>> <para>\n>> - <literal>behavior</literal> defines how foreign tables should\n>> - be truncated, using as possible values <literal>DROP_RESTRICT</literal>,\n>> - which means that <literal>RESTRICT</literal> option is specified,\n>> - and <literal>DROP_CASCADE</literal>, which means that\n>> - <literal>CASCADE</literal> option is specified, in\n>> - <command>TRUNCATE</command> command.\n>> + <literal>behavior</literal> is either <literal>DROP_RESTRICT</literal>\n>> + or <literal>DROP_CASCADE</literal>, which indicates that the\n>> + <literal>RESTRICT</literal> or <literal>CASCADE</literal> option was\n>> + requested in the original <command>TRUNCATE</command> command,\n>> + respectively.\n> \n> Now that I reread this, I would change \"which indicates\" to \"indicating\".\n\nFixed. Thanks for reviewing the patch!\nI will post the updated version of the patch later.\n\n\n> \n>> - <literal>restart_seqs</literal> is set to <literal>true</literal>\n>> - if <literal>RESTART IDENTITY</literal> option is specified in\n>> - <command>TRUNCATE</command> command. 
It is <literal>false</literal>\n>> - if <literal>CONTINUE IDENTITY</literal> option is specified.\n>> + If <literal>restart_seqs</literal> is <literal>true</literal>,\n>> + the original <command>TRUNCATE</command> command requested the\n>> + <literal>RESTART IDENTITY</literal> option, otherwise\n>> + <literal>CONTINUE IDENTITY</literal> option.\n> \n> should it say \"specified\" instead of requested ?\n> Or should it say \"requested the RESTART IDENTITY behavior\" ?\n> \n> Also, I think it should say \"..otherwise, the CONTINUE IDENTITY behavior was\n> requested\".\n\nFixed.\n\n \n>> +++ b/doc/src/sgml/ref/truncate.sgml\n>> @@ -173,7 +173,7 @@ TRUNCATE [ TABLE ] [ ONLY ] <replaceable class=\"parameter\">name</replaceable> [\n>> \n>> <para>\n>> <command>TRUNCATE</command> can be used for foreign tables if\n>> - the foreign data wrapper supports, for instance,\n>> + supported by the foreign data wrapper, for instance,\n>> see <xref linkend=\"postgres-fdw\"/>.\n> \n> what does \"for instance\" mean here? I think it should be removed.\n\nRemoved.\n\n\n> \n>> +++ b/doc/src/sgml/fdwhandler.sgml\n>> @@ -1111,6 +1099,15 @@ ExecForeignTruncate(List *rels, List *rels_extra,\n>> if <literal>CONTINUE IDENTITY</literal> option is specified.\n>> </para>\n>> \n>> + <para>\n>> + Note that information about <literal>ONLY</literal> options specified\n>> + in the original <command>TRUNCATE</command> command is not passed to\n>> + <function>ExecForeignTruncate</function>. 
This is the same behavior as\n>> + for the callback functions for <command>SELECT</command>,\n>> + <command>UPDATE</command> and <command>DELETE</command> on\n> \n> There's an extra space before DELETE\n\nFixed.\n\n\n> \n>> diff --git a/doc/src/sgml/postgres-fdw.sgml b/doc/src/sgml/postgres-fdw.sgml\n>> index 5320accf6f..d03731b7d4 100644\n>> --- a/doc/src/sgml/postgres-fdw.sgml\n>> +++ b/doc/src/sgml/postgres-fdw.sgml\n>> @@ -69,6 +69,13 @@\n>> have privileges to do these things.)\n>> </para>\n>> \n>> + <para>\n>> + Note that <literal>ONLY</literal> option specified in\n> \n> add \"the\" to say: \"the ONLY\"\n\nFixed.\n\n\n> \n>> + <command>SELECT</command>, <command>UPDATE</command>,\n>> + <command>DELETE</command> or <command>TRUNCATE</command>\n>> + has no effect when accessing or modifyung the remote table.\n> \n> modifying\n\nFixed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 23 Apr 2021 17:05:45 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On 2021/04/22 20:27, Bharath Rupireddy wrote:\n> On Thu, Apr 22, 2021 at 12:06 PM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>> On 2021/04/22 9:39, Bharath Rupireddy wrote:\n>>> One comment on truncate_foreign_table_docs_v1.patch:\n>>> 1) I think it is \"to be truncated\"\n>>> + <literal>rels</literal> is a list of <structname>Relation</structname>\n>>> + data structures for each foreign table to truncated.\n>>\n>> Fixed. Thanks!\n>>\n>>> How about a slightly changed phrasing like below?\n>>> + <literal>rels</literal> is a list of <structname>Relation</structname>\n>>> + data structures of foreign tables to truncate.\n>> Either works at least for me. 
If you think that this phrasing is\n>> more precise or better, I'm ok with that and will update the patch again.\n> \n> IMO, \"rels is a list of Relation data structures of foreign tables to\n> truncate.\" looks better.\n\nFixed.\n\nThanks for reviewing the patches.\nAttached are the updated versions of the patches.\nThese patches include the fixes pointed by Justin.\n\n\n> \n>>>>> 3) How about adding an extra para(after below para in\n>>>>> postgres_fdw.sgml) on WHY we don't push \"ONLY\" to foreign tables while\n>>>>> truncating? We could add to the same para for other options if at all\n>>>>> we don't choose to push them.\n>>>>> <command>DELETE</command>, or <command>TRUNCATE</command>.\n>>>>> (Of course, the remote user you have specified in your user mapping must\n>>>>> have privileges to do these things.)\n>>>>\n>>>> I agree to document the behavior that ONLY option is always ignored\n>>>> for foreign tables. But I'm not sure if we can document WHY.\n>>>> Because I could not find the past discussion about why ONLY option is\n>>>> ignored on SELECT, etc... Maybe it's enough to document the behavior?\n>>>\n>>> +1 to specify in the documentation about ONLY option is always\n>>> ignored.\n>>\n>> Added.\n>>\n>>> But can we specify the WHY part within deparseTruncateSql, it\n>>> will be there for developer reference? I feel it's better if this\n>>> change goes with truncate_foreign_table_dont_pass_only_clause_v2.patch\n>>\n>> I added this information into fdwhandler.sgml because the developers\n>> usually read fdwhandler.sgml.\n> \n> Thanks!\n> \n> + <para>\n> + Note that information about <literal>ONLY</literal> options specified\n> + in the original <command>TRUNCATE</command> command is not passed to\n> \n> I think it is not \"information about\", no? We just don't pass ONLY\n> option instead we skip it. 
IMO, we can say \"Note that the ONLY option\n> specified with a foreign table in the original TRUNCATE command is\n> skipped and not passed to ExecForeignTruncate.\"\n\nProbably I still fail to understand your point.\nBut if \"information about\" is confusing, I'm ok to\nremove that. Fixed.\n\n\n> \n> + <function>ExecForeignTruncate</function>. This is the same behavior as\n> + for the callback functions for <command>SELECT</command>,\n> + <command>UPDATE</command> and <command>DELETE</command> on\n> + a foreign table.\n> \n> How about \"This behaviour is similar to the callback functions of\n> SELECT, UPDATE, DELETE on a foreign table\"?\n\nFixed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Fri, 23 Apr 2021 17:09:38 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Fri, Apr 23, 2021 at 1:39 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > + <para>\n> > + Note that information about <literal>ONLY</literal> options specified\n> > + in the original <command>TRUNCATE</command> command is not passed to\n> >\n> > I think it is not \"information about\", no? We just don't pass ONLY\n> > option instead we skip it. IMO, we can say \"Note that the ONLY option\n> > specified with a foreign table in the original TRUNCATE command is\n> > skipped and not passed to ExecForeignTruncate.\"\n>\n> Probably I still fail to understand your point.\n> But if \"information about\" is confusing, I'm ok to\n> remove that. Fixed.\n\nA small typo in the docs patch: It is \"are not passed to\", instead of\n\"is not passed to\" since we used plural \"options\". 
\"Note that the ONLY\noptions specified in the original TRUNCATE command are not passed to\"\n\n+ Note that the <literal>ONLY</literal> options specified\n in the original <command>TRUNCATE</command> command is not passed to\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 23 Apr 2021 16:26:32 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On 2021/04/23 19:56, Bharath Rupireddy wrote:\n> On Fri, Apr 23, 2021 at 1:39 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>> + <para>\n>>> + Note that information about <literal>ONLY</literal> options specified\n>>> + in the original <command>TRUNCATE</command> command is not passed to\n>>>\n>>> I think it is not \"information about\", no? We just don't pass ONLY\n>>> option instead we skip it. IMO, we can say \"Note that the ONLY option\n>>> specified with a foreign table in the original TRUNCATE command is\n>>> skipped and not passed to ExecForeignTruncate.\"\n>>\n>> Probably I still fail to understand your point.\n>> But if \"information about\" is confusing, I'm ok to\n>> remove that. Fixed.\n> \n> A small typo in the docs patch: It is \"are not passed to\", instead of\n> \"is not passed to\" since we used plural \"options\". \"Note that the ONLY\n> options specified in the original TRUNCATE command are not passed to\"\n> \n> + Note that the <literal>ONLY</literal> options specified\n> in the original <command>TRUNCATE</command> command is not passed to\n\nThanks for the review! 
I fixed this.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Sat, 24 Apr 2021 01:19:59 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Fri, Apr 23, 2021 at 9:50 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> Thanks for the review! I fixed this.\n\nThanks for the updated patches.\n\nIn docs v4 patch, I think we can combine below two lines into a single line:\n+ supported by the foreign data wrapper,\n see <xref linkend=\"postgres-fdw\"/>.\n\nOther than the above minor change, both patches look good to me, I\nhave no further comments.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 26 Apr 2021 10:22:16 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "\n\nOn 2021/04/26 13:52, Bharath Rupireddy wrote:\n> On Fri, Apr 23, 2021 at 9:50 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> Thanks for the review! I fixed this.\n> \n> Thanks for the updated patches.\n> \n> In docs v4 patch, I think we can combine below two lines into a single line:\n> + supported by the foreign data wrapper,\n> see <xref linkend=\"postgres-fdw\"/>.\n\nYou mean \"supported by the foreign data wrapper <xref linkend=\"postgres-fdw\"/>\"?\n\nI was thinking that it's better to separate them because postgres_fdw\nis just an example of the foreign data wrapper supporting TRUNCATE.\nThis makes me think again; isn't it better to add \"for example\" or\n\"for instance\" into after \"data wrapper\"? 
That is,\n\n <command>TRUNCATE</command> can be used for foreign tables if\n supported by the foreign data wrapper, for instance,\n see <xref linkend=\"postgres-fdw\"/>.\n\n\n> Other than the above minor change, both patches look good to me, I\n> have no further comments.\n\nThanks! I pushed the patch truncate_foreign_table_dont_pass_only_clause_xx.patch, at first.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 27 Apr 2021 14:49:54 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "On Tue, Apr 27, 2021 at 11:19 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> > In docs v4 patch, I think we can combine below two lines into a single line:\n> > + supported by the foreign data wrapper,\n> > see <xref linkend=\"postgres-fdw\"/>.\n>\n> You mean \"supported by the foreign data wrapper <xref linkend=\"postgres-fdw\"/>\"?\n>\n> I was thinking that it's better to separate them because postgres_fdw\n> is just an example of the foreign data wrapper supporting TRUNCATE.\n> This makes me think again; isn't it better to add \"for example\" or\n> \"for instance\" into after \"data wrapper\"? 
That is,\n>\n> <command>TRUNCATE</command> can be used for foreign tables if\n> supported by the foreign data wrapper, for instance,\n> see <xref linkend=\"postgres-fdw\"/>.\n\n+1.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Apr 2021 11:32:06 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" }, { "msg_contents": "\n\nOn 2021/04/27 15:02, Bharath Rupireddy wrote:\n> On Tue, Apr 27, 2021 at 11:19 AM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>>> In docs v4 patch, I think we can combine below two lines into a single line:\n>>> + supported by the foreign data wrapper,\n>>> see <xref linkend=\"postgres-fdw\"/>.\n>>\n>> You mean \"supported by the foreign data wrapper <xref linkend=\"postgres-fdw\"/>\"?\n>>\n>> I was thinking that it's better to separate them because postgres_fdw\n>> is just an example of the foreign data wrapper supporting TRUNCATE.\n>> This makes me think again; isn't it better to add \"for example\" or\n>> \"for instance\" into after \"data wrapper\"? That is,\n>>\n>> <command>TRUNCATE</command> can be used for foreign tables if\n>> supported by the foreign data wrapper, for instance,\n>> see <xref linkend=\"postgres-fdw\"/>.\n> \n> +1.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 27 Apr 2021 18:40:49 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: TRUNCATE on foreign table" } ]
[ { "msg_contents": "Why is GlobalVisIsRemovableFullXid() not named\nGlobalVisCheckRemovableFullXid() instead? ISTM that that name makes\nmuch more sense, since it is what I'd expect for a function that is\nthe \"Full XID equivalent\" of GlobalVisCheckRemovableXid().\n\nNote also that GlobalVisIsRemovableFullXid() is the only symbol name\nmatching \"GlobalVisIsRemovable*\".\n\nHave I missed something?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 6 Feb 2021 12:27:30 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "GlobalVisIsRemovableFullXid() vs GlobalVisCheckRemovableXid()" }, { "msg_contents": "Hi,\n\nOn 2021-02-06 12:27:30 -0800, Peter Geoghegan wrote:\n> Why is GlobalVisIsRemovableFullXid() not named\n> GlobalVisCheckRemovableFullXid() instead? ISTM that that name makes\n> much more sense, since it is what I'd expect for a function that is\n> the \"Full XID equivalent\" of GlobalVisCheckRemovableXid().\n> \n> Note also that GlobalVisIsRemovableFullXid() is the only symbol name\n> matching \"GlobalVisIsRemovable*\".\n\nLooks like a mistake on my part... Probably a rename regex that somehow\nwent wrong - I went back and forth on those names way too many\ntimes. Want me to push the fix?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 6 Feb 2021 19:40:24 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: GlobalVisIsRemovableFullXid() vs GlobalVisCheckRemovableXid()" }, { "msg_contents": "On Sat, Feb 6, 2021 at 7:40 PM Andres Freund <andres@anarazel.de> wrote:\n> Looks like a mistake on my part... Probably a rename regex that somehow\n> went wrong - I went back and forth on those names way too many\n> times. Want me to push the fix?\n\nYes, please do. 
I could do it myself, but better that you do it\nyourself, just in case.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 6 Feb 2021 19:41:39 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: GlobalVisIsRemovableFullXid() vs GlobalVisCheckRemovableXid()" }, { "msg_contents": "On Sat, Feb 6, 2021 at 7:41 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Yes, please do. I could do it myself, but better that you do it\n> yourself, just in case.\n\nI went ahead and fixed it myself.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 7 Feb 2021 10:12:03 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: GlobalVisIsRemovableFullXid() vs GlobalVisCheckRemovableXid()" }, { "msg_contents": "On Sat, Feb 6, 2021 at 7:40 PM Andres Freund <andres@anarazel.de> wrote:\n> Looks like a mistake on my part... Probably a rename regex that somehow\n> went wrong - I went back and forth on those names way too many\n> times. Want me to push the fix?\n\nSpotted another one: Shouldn't ReadNextFullTransactionId() actually be\ncalled ReadNewFullTransactionId()? Actually, the inverse approach\nlooks like it produces fewer inconsistencies: you could instead rename\nReadNewTransactionId() to ReadNextTransactionId().\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 14 Feb 2021 13:01:57 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: GlobalVisIsRemovableFullXid() vs GlobalVisCheckRemovableXid()" }, { "msg_contents": "On Mon, Feb 15, 2021 at 10:02 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Sat, Feb 6, 2021 at 7:40 PM Andres Freund <andres@anarazel.de> wrote:\n> > Looks like a mistake on my part... Probably a rename regex that somehow\n> > went wrong - I went back and forth on those names way too many\n> > times. Want me to push the fix?\n>\n> Spotted another one: Shouldn't ReadNextFullTransactionId() actually be\n> called ReadNewFullTransactionId()? 
Actually, the inverse approach\n> looks like it produces fewer inconsistencies: you could instead rename\n> ReadNewTransactionId() to ReadNextTransactionId().\n\nI prefer \"next\", because that's in the name of the variable it reads,\nand the variable name seemed to me to have a more obvious meaning.\nThat's why I went for that name in commit 2fc7af5e966. I do agree\nthat it's slightly strange that the 32 and 64 bit versions differ\nhere. I'd vote for renaming the 32 bit version to match...\n\n\n", "msg_date": "Mon, 15 Feb 2021 11:07:37 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: GlobalVisIsRemovableFullXid() vs GlobalVisCheckRemovableXid()" }, { "msg_contents": "On Sun, Feb 14, 2021 at 2:08 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I prefer \"next\", because that's in the name of the variable it reads,\n> and the variable name seemed to me to have a more obvious meaning.\n> That's why I went for that name in commit 2fc7af5e966. I do agree\n> that it's slightly strange that the 32 and 64 bit versions differ\n> here. I'd vote for renaming the 32 bit version to match...\n\nI was just going to say the same thing myself.\n\nPlease do the honors if you have time...\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 14 Feb 2021 14:33:07 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: GlobalVisIsRemovableFullXid() vs GlobalVisCheckRemovableXid()" }, { "msg_contents": "On Mon, Feb 15, 2021 at 11:33 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Sun, Feb 14, 2021 at 2:08 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > I prefer \"next\", because that's in the name of the variable it reads,\n> > and the variable name seemed to me to have a more obvious meaning.\n> > That's why I went for that name in commit 2fc7af5e966. I do agree\n> > that it's slightly strange that the 32 and 64 bit versions differ\n> > here. 
I'd vote for renaming the 32 bit version to match...\n>\n> I was just going to say the same thing myself.\n>\n> Please do the honors if you have time...\n\nDone.\n\n\n", "msg_date": "Mon, 15 Feb 2021 13:20:29 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: GlobalVisIsRemovableFullXid() vs GlobalVisCheckRemovableXid()" }, { "msg_contents": "On Sun, Feb 14, 2021 at 4:21 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Done.\n\nThanks.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 14 Feb 2021 17:14:47 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: GlobalVisIsRemovableFullXid() vs GlobalVisCheckRemovableXid()" } ]
[ { "msg_contents": "Hi,\n\nI just noticed that if you load a file using psql:\n\n\\copy <table> from <local file>\n\nit sends every line as a separate FE/BE protocol CopyData packet. That's \npretty wasteful if the lines are narrow. The overhead of each CopyData \npacket is 5 bytes.\n\nTo demonstrate, I generated a simple test file with the string \"foobar\" \nrepeated 10 million times:\n\n$ perl -le 'for (1..10000000) { print \"foobar\" }' > /tmp/testdata\n\nand loaded that into a temp table with psql:\n\ncreate temporary table copytest (t text) on commit delete rows;\n\\copy copytest from '/tmp/testdata';\n\nI repeated and timed the \\copy a few times; it takes about 3 \nseconds on my laptop:\n\npostgres=# \\copy copytest from '/tmp/testdata';\nCOPY 10000000\nTime: 3039.625 ms (00:03.040)\n\nWireshark says that that involved about 120 MB of network traffic. The \nsize of the file on disk is only 70 MB.\n\nThe attached patch modifies psql so that it buffers up 8 kB of data into \neach CopyData message, instead of sending one per line. That makes the \noperation faster:\n\npostgres=# \\copy copytest from '/tmp/testdata';\nCOPY 10000000\nTime: 2490.268 ms (00:02.490)\n\nAnd wireshark confirms that there's now only a bit over 70 MB of network \ntraffic.\n\nI'll add this to the next commitfest. There's similar inefficiency in \nthe server side in COPY TO, but I'll leave that for another patch.\n\n- Heikki", "msg_date": "Sun, 7 Feb 2021 00:13:38 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "psql \\copy from sends a lot of packets" }, { "msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> I just noticed that if you load a file using psql:\n> it sends every line as a separate FE/BE protocol CopyData packet.\n> ...\n> I'll add this to the next commitfest. 
There's similar inefficiency in \n> the server side in COPY TO, but I'll leave that for another patch.\n\nThe FE/BE protocol documentation is pretty explicit about this:\n\n Copy-in mode (data transfer to the server) is initiated when the\n backend executes a COPY FROM STDIN SQL statement. The backend sends a\n CopyInResponse message to the frontend. The frontend should then send\n zero or more CopyData messages, forming a stream of input data. (The\n message boundaries are not required to have anything to do with row\n boundaries, although that is often a reasonable choice.)\n ...\n Copy-out mode (data transfer from the server) is initiated when the\n backend executes a COPY TO STDOUT SQL statement. The backend sends a\n CopyOutResponse message to the frontend, followed by zero or more\n CopyData messages (always one per row), followed by CopyDone.\n\nSo while changing psql isn't so much a problem, changing the server\nis a wire protocol break. Maybe we should do it anyway, but I'm\nnot sure.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Feb 2021 17:23:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql \\copy from sends a lot of packets" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nThe patch was marked as the one that needs review and doesn't currently have\r\na reviewer, so I decided to take a look. The patch was tested on MacOS against\r\nmaster `e0271d5f`. It works fine and doesn't seem to contradict the current\r\ndocumentation.\r\n\r\nThe future COPY TO patch may require some changes in the docs, as Tom pointed\r\nout. 
I also wonder if it may affect any 3rd party applications and if we care\r\nabout this, but I suggest we discuss this when and if a corresponding patch\r\nwill be proposed.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Tue, 13 Jul 2021 11:52:40 +0000", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: psql \\copy from sends a lot of packets" }, { "msg_contents": "On 13/07/2021 14:52, Aleksander Alekseev wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: tested, passed\n> \n> The patch was marked as the one that needs review and doesn't currently have\n> a reviewer, so I decided to take a look. The patch was tested on MacOS against\n> master `e0271d5f`. It works fine and doesn't seem to contradict the current\n> documentation.\n\nThanks for the review! I read through it myself one more time and \nspotted one bug: in interactive mode, the prompt was printed twice in \nthe beginning of the operation. Fixed that, and pushed.\n\n- Heikki\n\n\n", "msg_date": "Wed, 14 Jul 2021 13:11:59 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: psql \\copy from sends a lot of packets" } ]
[ { "msg_contents": "Hi Hackers,\n\nI found a bug in the query rewriter. If a query that has a modifying\nCTE is re-written, the hasModifyingCTE flag is not getting set in the\nre-written query. This bug can result in the query being allowed to\nexecute in parallel-mode, which results in an error.\n\nI originally found the problem using INSERT (which doesn't actually\naffect the current Postgres code, as it doesn't support INSERT in\nparallel mode) but a colleague of mine (Hou, Zhijie) managed to\nreproduce it using SELECT as well (see example below), and helped to\nminimize the patch size.\n\nI've attached the patch with the suggested fix (reviewed by Amit Langote).\n\n\nThe following reproduces the issue (adapted from a test case in the\n\"with\" regression tests):\n\ndrop table if exists test_data1;\ncreate table test_data1(a int, b int) ;\ninsert into test_data1 select generate_series(1,1000), generate_series(1,1000);\nset force_parallel_mode=on;\nCREATE TEMP TABLE bug6051 AS\nselect i from generate_series(1,3) as i;\nSELECT * FROM bug6051;\nCREATE RULE bug6051_ins AS ON INSERT TO bug6051 DO INSTEAD select a as\ni from test_data1;\nWITH t1 AS ( DELETE FROM bug6051 RETURNING * ) INSERT INTO bug6051\nSELECT * FROM t1;\n\nproduces the error:\n\n ERROR: cannot assign XIDs during a parallel operation\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia", "msg_date": "Sun, 7 Feb 2021 09:29:15 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": true, "msg_subject": "Bug in query rewriter - hasModifyingCTE not getting set" }, { "msg_contents": "Greg Nancarrow <gregn4422@gmail.com> writes:\n> I found a bug in the query rewriter. 
If a query that has a modifying\n> CTE is re-written, the hasModifyingCTE flag is not getting set in the\n> re-written query.\n\nUgh.\n\n> I've attached the patch with the suggested fix (reviewed by Amit Langote).\n\nI think either the bit about rule_action is unnecessary, or most of\nthe code immediately above this is wrong, because it's only updating\nflags in sub_action. Why do you think it's necessary to change\nrule_action in addition to sub_action?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Feb 2021 18:03:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in query rewriter - hasModifyingCTE not getting set" }, { "msg_contents": "After poking around a bit more, I notice that the hasRecursive flag\nreally ought to get propagated as well, since that's also an attribute\nof the CTE list. That omission doesn't seem to have any ill effect\ntoday, since nothing in planning or execution looks at that flag, but\nsomeday it might. So what I think we should do is as attached.\n(I re-integrated your example into with.sql, too.)\n\nGiven the very limited time remaining before the release wrap, I'm\ngoing to go ahead and push this.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 06 Feb 2021 19:05:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in query rewriter - hasModifyingCTE not getting set" }, { "msg_contents": "On Sun, Feb 7, 2021 at 10:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Greg Nancarrow <gregn4422@gmail.com> writes:\n> > I found a bug in the query rewriter. 
If a query that has a modifying\n> > CTE is re-written, the hasModifyingCTE flag is not getting set in the\n> > re-written query.\n>\n> Ugh.\n>\n> > I've attached the patch with the suggested fix (reviewed by Amit Langote).\n>\n> I think either the bit about rule_action is unnecessary, or most of\n> the code immediately above this is wrong, because it's only updating\n> flags in sub_action. Why do you think it's necessary to change\n> rule_action in addition to sub_action?\n>\n\nI believe that the bit about rule_action IS necessary, as it's needed\nfor the case of INSERT...SELECT, so that hasModifyingCTE is set on the\nrewritten INSERT (see comment above the call to\ngetInsertSelectQuery(), and the \"KLUDGE ALERT\" comment within that\nfunction).\n\nIn the current Postgres code, it doesn't let INSERT run in\nparallel-mode (only SELECT), but in the debugger you can clearly see\nthat for an INSERT with a subquery that uses a modifying CTE, the\nhasModifyingCTE flag is not getting set on the rewritten INSERT query\nby the query rewriter. 
As I've been working on parallel INSERT, I\nfound the issue first for INSERT (one test failure in the \"with\" tests\nwhen force_parallel_mode=regress).\n\nHere's some silly SQL (very similar to existing test case in the\n\"with\" tests) to reproduce the issue for INSERT (as I said, it won't\ngive an error like the SELECT case, as currently INSERT is not allowed\nin parallel-mode anyway, but the issue can be seen in the debugger):\n\nset force_parallel_mode=on;\nCREATE TABLE bug6051 AS\n select i from generate_series(1,3) as i;\nSELECT * FROM bug6051;\nCREATE TABLE bug6051_2 (i int);\nCREATE RULE bug6051_ins AS ON INSERT TO bug6051 DO INSTEAD\n INSERT INTO bug6051_2\n SELECT NEW.i;\nWITH t1 AS ( DELETE FROM bug6051 RETURNING * )\nINSERT INTO bug6051 SELECT * FROM t1;\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n", "msg_date": "Sun, 7 Feb 2021 23:26:40 +1100", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Bug in query rewriter - hasModifyingCTE not getting set" }, { "msg_contents": "Greg Nancarrow <gregn4422@gmail.com> writes:\n> On Sun, Feb 7, 2021 at 10:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think either the bit about rule_action is unnecessary, or most of\n>> the code immediately above this is wrong, because it's only updating\n>> flags in sub_action. Why do you think it's necessary to change\n>> rule_action in addition to sub_action?\n\n> I believe that the bit about rule_action IS necessary, as it's needed\n> for the case of INSERT...SELECT, so that hasModifyingCTE is set on the\n> rewritten INSERT (see comment above the call to\n> getInsertSelectQuery(), and the \"KLUDGE ALERT\" comment within that\n> function).\n\nHm. 
So after looking at this more, the problem is that the rewrite\nis producing something equivalent to\n\nINSERT INTO bug6051_2\n(WITH t1 AS (DELETE FROM bug6051 RETURNING *) SELECT * FROM t1);\n\nIf you try to do that directly, the parser will give you the raspberry:\n\nERROR: WITH clause containing a data-modifying statement must be at the top level\nLINE 2: (WITH t1 AS (DELETE FROM bug6051 RETURNING *) SELECT * FROM ...\n ^\n\nThe code throwing that error, in analyzeCTE(), explains\n\n /*\n * We disallow data-modifying WITH except at the top level of a query,\n * because it's not clear when such a modification should be executed.\n */\n\nThat semantic issue doesn't get any less pressing just because the query\nwas generated by rewrite. So I now think that what we have to do is\nthrow an error if we have a modifying CTE and sub_action is different\nfrom rule_action. Not quite sure how to phrase the error though.\n\nIn view of this, maybe the right thing is to disallow modifying CTEs\nin rule actions in the first place. I see we already do that for\nviews (i.e. ON SELECT rules), but they're not really any safer in\nother types of rules. Given that non-SELECT rules are an undertested\nlegacy thing, I'm not that excited about moving mountains to make\nthis case possible.\n\nAnyway, I think I'm going to go revert the patch I crammed in last night.\nThere's more here than meets the eye, and right before a release is no\ntime to be fooling with an issue that's been there for years.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 07 Feb 2021 12:44:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in query rewriter - hasModifyingCTE not getting set" }, { "msg_contents": "I wrote:\n> That semantic issue doesn't get any less pressing just because the query\n> was generated by rewrite. So I now think that what we have to do is\n> throw an error if we have a modifying CTE and sub_action is different\n> from rule_action. 
Not quite sure how to phrase the error though.\n\nAnother idea that'd avoid disallowing functionality is to try to attach\nthe CTEs to the rule_action not the sub_action. This'd require adjusting\nctelevelsup in appropriate parts of the parsetree when those are\ndifferent, so it seems like it'd be a pain. I remain unconvinced that\nit's worth it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 07 Feb 2021 14:05:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in query rewriter - hasModifyingCTE not getting set" }, { "msg_contents": "From: Tom Lane <tgl@sss.pgh.pa.us>\n> In view of this, maybe the right thing is to disallow modifying CTEs\n> in rule actions in the first place. I see we already do that for\n> views (i.e. ON SELECT rules), but they're not really any safer in\n> other types of rules.\n\nYou meant by views something like the following, didn't you?\n\npostgres=# create view myview as with t as (delete from b) select * from a;\nERROR: views must not contain data-modifying statements in WITH\n\nOTOH, the examples Greg-san showed do not contain CTE in the rule action, but in the query that the rule is applied to. So, I think the solution would be something different.\n\n\n> Given that non-SELECT rules are an undertested\n> legacy thing, I'm not that excited about moving mountains to make\n> this case possible.\n\n> That semantic issue doesn't get any less pressing just because the query\n> was generated by rewrite. So I now think that what we have to do is\n> throw an error if we have a modifying CTE and sub_action is different\n> from rule_action. Not quite sure how to phrase the error though.\n\nSo, how about just throwing an error when the original query (not the rule action) has a data-modifying CTE? The error message would be something like \"a query containing a data-modifying CTE cannot be executed because there is some rule applicable to the relation\". 
This may be overkill and too many regression tests might fail, so we may have to add some condition to determine if we error out.\n\nOr, I thought Greg-san's patch would suffice. What problem do you see in it?\n\nI couldn't imagine what \"mountains\" are. Could you tell me what that means?\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Tue, 18 May 2021 03:59:21 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Bug in query rewriter - hasModifyingCTE not getting set" }, { "msg_contents": "From: Tom Lane <tgl@sss.pgh.pa.us>\n> Greg Nancarrow <gregn4422@gmail.com> writes:\n> > On Sun, Feb 7, 2021 at 10:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I think either the bit about rule_action is unnecessary, or most of\n> >> the code immediately above this is wrong, because it's only updating\n> >> flags in sub_action. Why do you think it's necessary to change\n> >> rule_action in addition to sub_action?\n> \n> > I believe that the bit about rule_action IS necessary, as it's needed\n> > for the case of INSERT...SELECT, so that hasModifyingCTE is set on the\n> > rewritten INSERT (see comment above the call to\n> > getInsertSelectQuery(), and the \"KLUDGE ALERT\" comment within that\n> > function).\n> \n> Hm. 
So after looking at this more, the problem is that the rewrite is producing\n> something equivalent to\n> \n> INSERT INTO bug6051_2\n> (WITH t1 AS (DELETE FROM bug6051 RETURNING *) SELECT * FROM t1);\n> \n> If you try to do that directly, the parser will give you the raspberry:\n> \n> ERROR: WITH clause containing a data-modifying statement must be at the\n> top level LINE 2: (WITH t1 AS (DELETE FROM bug6051 RETURNING *) SELECT *\n> FROM ...\n> ^\n> \n> The code throwing that error, in analyzeCTE(), explains\n> \n> /*\n> * We disallow data-modifying WITH except at the top level of a query,\n> * because it's not clear when such a modification should be executed.\n> */\n> \n> That semantic issue doesn't get any less pressing just because the query was\n> generated by rewrite. So I now think that what we have to do is throw an error\n> if we have a modifying CTE and sub_action is different from rule_action. Not\n> quite sure how to phrase the error though.\n\nI am +1 for throwing an error if we have a modifying CTE and sub_action is different\nfrom rule_action. As we disallowed data-modifying CTEs which are not at the top level\nof a query, it will be safe and consistent to disallow the same case here.\n\nMaybe we can output a message like the following?\n\"DO INSTEAD INSERT ... SELECT rules are not supported for INSERT that contains data-modifying statements in WITH.\"\n\nBest regards,\nhouzj\n\n\n\n", "msg_date": "Thu, 20 May 2021 05:54:27 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Bug in query rewriter - hasModifyingCTE not getting set" }, { "msg_contents": "From: Tom Lane <tgl@sss.pgh.pa.us>\n> I think either the bit about rule_action is unnecessary, or most of\n> the code immediately above this is wrong, because it's only updating\n> flags in sub_action. Why do you think it's necessary to change\n> rule_action in addition to sub_action?\n\nFinally, I think I've understood what you meant. 
Yes, the current code seems to be wrong. rule_action is different from sub_action only when the rule action (the query specified in CREATE RULE) is INSERT SELECT. In that case, rule_action points to the entire INSERT SELECT, while sub_action points to the SELECT part. So, we should add the CTE list and set hasModifyingCTE/hasRecursive flags in rule_action.\n\n\n> Hm. So after looking at this more, the problem is that the rewrite\n> is producing something equivalent to\n> \n> INSERT INTO bug6051_2\n> (WITH t1 AS (DELETE FROM bug6051 RETURNING *) SELECT * FROM t1);\n\nYes. In this case, the WITH clause must be put before INSERT.\n\nThe attached patch is based on your version. It includes cosmetic changes to use = instead of |= for boolean variable assignments. make check passed. Also, Greg-san's original failed test case succeeded. I confirmed that the hasModifyingCTE of the top-level rewritten query is set to true now by looking at the output of debug_print_rewritten and debug_print_plan.\n\n\nRegards\nTakayuki Tsunakawa", "msg_date": "Thu, 20 May 2021 14:27:28 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Bug in query rewriter - hasModifyingCTE not getting set" }, { "msg_contents": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> writes:\n> From: Tom Lane <tgl@sss.pgh.pa.us>\n>> I think either the bit about rule_action is unnecessary, or most of\n>> the code immediately above this is wrong, because it's only updating\n>> flags in sub_action. Why do you think it's necessary to change\n>> rule_action in addition to sub_action?\n\n> Finally, I think I've understood what you meant. Yes, the current code seems to be wrong.\n\nI'm fairly skeptical of this claim, because that code has stood for a\nlong time. 
Can you provide an example (not involving hasModifyingCTE)\nin which it's wrong?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 20 May 2021 11:17:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in query rewriter - hasModifyingCTE not getting set" }, { "msg_contents": "From: Tom Lane <tgl@sss.pgh.pa.us>\n> \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> writes:\n> > Finally, I think I've understood what you meant. Yes, the current code seems\n> to be wrong.\n> \n> I'm fairly skeptical of this claim, because that code has stood for a\n> long time. Can you provide an example (not involving hasModifyingCTE)\n> in which it's wrong?\n\nHmm, I can't think of an example. I wonder if attaching WITH before INSERT SELECT and putting WITH between INSERT and SELECT produce the same results. Maybe that's why the regression test succeeds with the patch.\n\nTo confirm, the question is: when we have the following rule in place and the client issues the query:\n\n[rule]\nCREATE RULE myrule AS\n ON {INSERT | UPDATE | DELETE} TO orig_table\n DO INSTEAD\n INSERT INTO some_table SELECT ...;\n\n[original query]\nWITH t AS (\n SELECT and/or NOTIFY\n)\n{INSERT INTO | UPDATE | DELETE FROM} orig_table ...;\n\nwhich of the following two queries do we expect?\n\n[generated query 1]\nWITH t AS (\n SELECT and/or NOTIFY\n)\n INSERT INTO some_table SELECT ...;\n\n[generated query 2]\n INSERT INTO some_table\nWITH t AS (\n SELECT and/or NOTIFY\n)\nSELECT ...;\n\nAlthough both may produce the same results, I naturally expected query 1, because (1) WITH was originally attached before the top-level query, and (2) the top-level query has been replaced with a rule action, so it's natural that the WITH is attached before the rule action. 
A super-abbreviated description is:\n\n x -> y (rule)\n WITH t x (original query)\n WITH t y (generated query 1)\n one-part-of-y WITH t another-part-of-y (generated query 2)\n\nAs we said, we agree to fail the query if it's the above generated query 2 and WITH contains a data-modifying CTE, if we cannot be confident to accept the change to the WITH position. Which do you think we want to choose?\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n", "msg_date": "Fri, 21 May 2021 06:41:57 +0000", "msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Bug in query rewriter - hasModifyingCTE not getting set" }, { "msg_contents": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> writes:\n> The attached patch is based on your version. It includes cosmetic\n> changes to use = instead of |= for boolean variable assignments.\n\nI think that's less \"cosmetic\" than \"gratuitous breakage\". The point\nhere is that we are combining two rtables, so the query had better\nend up with flags that describe the union of the rtables' properties.\nOur regression tests are unfortunately not very thorough in this area,\nso it doesn't surprise me that they fail to fall over.\n\nAfter thinking about it for awhile, I'm okay with the concept of\nattaching the source query's CTEs to the parent rule_action so far\nas the semantics are concerned. But this patch fails to implement\nthat correctly. If we're going to do it like that, then the\nctelevelsup fields of any CTE RTEs that refer to those CTEs have\nto be incremented when rule_action is different from sub_action,\nbecause the CTEs are getting attached one level higher in the\nquery nest than the referencing RTEs are. The proposed test case\nfails to expose this, because the rule action isn't INSERT/SELECT,\nso the case of interest isn't being exercised at all. 
However,\nit's harder than you might think to demonstrate a problem ---\nI first tried\n\nCREATE RULE bug6051_3_ins AS ON INSERT TO bug6051_3 DO INSTEAD\n INSERT INTO bug6051_2 SELECT a FROM bug6051_3;\n\nand that failed to fall over with the patch. Turns out that's\nbecause the SELECT part is simple enough to be pulled up, and\nthe pull-up moves the CTE that's been put into it one level\nhigher, causing it to accidentally have the correct ctelevelsup\nanyway. If you use an INSERT with a non-pull-up-able SELECT\nthen you can see the problem: this script\n\nCREATE TEMP TABLE bug6051_2 (i int);\n\nCREATE TEMP TABLE bug6051_3 AS\n select a from generate_series(11,13) as a;\n\nCREATE RULE bug6051_3_ins AS ON INSERT TO bug6051_3 DO INSTEAD\n INSERT INTO bug6051_2 SELECT sum(a) FROM bug6051_3;\n\nexplain verbose\nWITH t1 AS ( DELETE FROM bug6051_3 RETURNING * )\n INSERT INTO bug6051_3 SELECT * FROM t1;\n\ncauses the patch to fail with\n\nERROR: could not find CTE \"t1\"\n\nNow, we could potentially make this work if we wrote code to run\nthrough the copied rtable entries (recursively) and increment the\nappropriate ctelevelsup fields by one. That would essentially\nhave to be a variant of IncrementVarSublevelsUp that *only* acts\non ctelevelsup and not other level-dependent fields. That's\nwhat I meant when I spoke of moving mountains: the amount of code\nthat would need to go into this seems out of all proportion to\nthe value. I think we should just throw an error, instead.\nAt least till such time as we see actual field complaints.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Sep 2021 18:00:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in query rewriter - hasModifyingCTE not getting set" }, { "msg_contents": "On Wed, Sep 8, 2021 at 8:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> \"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com> writes:\n> > The attached patch is based on your version. 
It includes cosmetic\n> > changes to use = instead of |= for boolean variable assignments.\n>\n> Now, we could potentially make this work if we wrote code to run\n> through the copied rtable entries (recursively) and increment the\n> appropriate ctelevelsup fields by one. That would essentially\n> have to be a variant of IncrementVarSublevelsUp that *only* acts\n> on ctelevelsup and not other level-dependent fields. That's\n> what I meant when I spoke of moving mountains: the amount of code\n> that would need to go into this seems out of all proportion to\n> the value. I think we should just throw an error, instead.\n> At least till such time as we see actual field complaints.\n>\n\n[I don't think Tsunakawa-san will be responding to this any time soon]\n\nI proposed a patch for this issue in a separate thread:\nhttps://www.postgresql.org/message-id/CAJcOf-f68DT=26YAMz_i0+Au3TcLO5oiHY5=fL6Sfuits6r+_w@mail.gmail.com\n\nThe patch takes your previously-reverted patch for this issue and adds an\nerror condition, so it does throw an error for that test case in your\nprevious post.\nIt also affects one existing regression test, since that uses an\nINSERT...SELECT rule action applied to a command with a data-modifying CTE\n(and we shouldn't really be allowing that anyway).\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia", "msg_date": "Wed, 8 Sep 2021 11:30:24 +1000", "msg_from": "Greg Nancarrow <gregn4422@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Bug in query rewriter - hasModifyingCTE not getting set" }, 
{ "msg_contents": "Greg Nancarrow <gregn4422@gmail.com> writes:\n> On Wed, Sep 8, 2021 at 8:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Now, we could potentially make this work if we wrote code to run\n>> through the copied rtable entries (recursively) and increment the\n>> appropriate ctelevelsup fields by one. That would essentially\n>> have to be a variant of IncrementVarSublevelsUp that *only* acts\n>> on ctelevelsup and not other level-dependent fields. That's\n>> what I meant when I spoke of moving mountains: the amount of code\n>> that would need to go into this seems out of all proportion to\n>> the value. I think we should just throw an error, instead.\n>> At least till such time as we see actual field complaints.\n\n> [I don't think Tsunakawa-san will be responding to this any time soon]\n\nOh! 
I'd not realized that he'd dropped out of the community, but\nchecking my mail folder, I don't see any messages from him in months\n... and his email address is bouncing, too. Too bad.\n\n> I proposed a patch for this issue in a separate thread:\n> https://www.postgresql.org/message-id/CAJcOf-f68DT=26YAMz_i0+Au3TcLO5oiHY5=fL6Sfuits6r+_w@mail.gmail.com\n\nRight, that one looks like an appropriate amount of effort\n(at least till someone gets way more excited about the case\nthan I am). I will mark this CF item Returned With Feedback\nand go see about that one.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Sep 2021 10:28:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in query rewriter - hasModifyingCTE not getting set" } ]
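The thread above concerns CTE propagation in the query rewriter; the next thread below concerns making psql's tab completion match case-insensitively. As a rough, self-contained illustration of the matching behavior debated there (this is not psql's actual C implementation — the function and variable names here are hypothetical), a minimal sketch: candidates are matched against the prefix case-insensitively, and the completions echo the case of the user's input, much like the all-uppercase completions shown in Tang's first message.

```python
# Hypothetical sketch of case-insensitive prefix completion.
# Not psql's real implementation: names below are illustrative only.
def complete(prefix, candidates):
    # Match candidates case-insensitively against the typed prefix.
    matches = [c for c in candidates if c.lower().startswith(prefix.lower())]
    # Echo the user's input case: all-uppercase input yields
    # uppercase completions, anything else yields lowercase ones.
    if prefix.isupper():
        return [m.upper() for m in matches]
    return [m.lower() for m in matches]

guc_names = ["all", "allow_system_table_mods", "application_name", "array_nulls"]
print(complete("a", guc_names))   # lowercase input -> lowercase completions
print(complete("A", guc_names))   # uppercase input -> uppercase completions
print(complete("ap", guc_names))  # longer prefix narrows the match
```

The real patch discussed below does the analogous thing inside the SQL that generates completion candidates (comparing `pg_catalog.lower()` of both sides), but the case-folding logic is the same idea.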
[ { "msg_contents": "Hi Hackers,\n\nWhen using psql I found there's no tab completion for upper character inputs. It's really inconvenient sometimes so I try to fix this problem in the attached patch.\n\nHere are the examples to show what this patch can do.\nAction: \n1. connect the db using psql \n2. input SQL command\n3. enter TAB key(twice at the very first time)\n\nResults:\n[master]\npostgres=# set a\nall allow_system_table_mods application_name array_nulls\npostgres=# set A\n\npostgres=# set A\n\n[patched]\npostgres=# set a\nall allow_system_table_mods application_name array_nulls\npostgres=# set A\nALL ALLOW_SYSTEM_TABLE_MODS APPLICATION_NAME ARRAY_NULLS\npostgres=# set A\n\nPlease take a look at this patch. Any comment is welcome.\n\nRegards,\nTang", "msg_date": "Sun, 7 Feb 2021 07:06:09 +0000", "msg_from": "\"Tang, Haiying\" <tanghy.fnst@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "Support tab completion for upper character inputs in psql" }, { "msg_contents": "\"Tang, Haiying\" <tanghy.fnst@cn.fujitsu.com> writes:\n> When using psql I found there's no tab completion for upper character inputs. It's really inconvenient sometimes so I try to fix this problem in the attached patch.\n\nThis looks like you're trying to force case-insensitive behavior\nwhether that is appropriate or not. Does not sound like a good\nidea.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 07 Feb 2021 13:55:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "At Sun, 07 Feb 2021 13:55:00 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> \"Tang, Haiying\" <tanghy.fnst@cn.fujitsu.com> writes:\n> > When using psql I found there's no tab completion for upper character inputs. 
It's really inconvenient sometimes so I try to fix this problem in the attached patch.\n> \n> This looks like you're trying to force case-insensitive behavior\n> whether that is appropriate or not. Does not sound like a good\n> idea.\n\nAgreed. However, I'm not sure what the OP exactly wants; \\set behaves\nin a different but similar way.\n\n=# \\set c[tab]\n=# \\set COMP_KEYWORD_CASE _\n\nHowever, set doesn't. If that is what is wanted, the following change on\nQuery_for_list_of_set_vars works (only for the case of SET/RESET\ncommands).\n\n\ndiff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c\nindex 5f0e775fd3..5c2a263785 100644\n--- a/src/bin/psql/tab-complete.c\n+++ b/src/bin/psql/tab-complete.c\n@@ -725,7 +725,8 @@ static const SchemaQuery Query_for_list_of_statistics = {\n \" UNION ALL SELECT 'role' \"\\\n \" UNION ALL SELECT 'tablespace' \"\\\n \" UNION ALL SELECT 'all') ss \"\\\n-\" WHERE substring(name,1,%d)='%s'\"\n+\" WHERE substring(name,1,%1$d)='%2$s' \"\\\n+\" OR pg_catalog.lower(substring(name,1,%1$d))=pg_catalog.lower('%2$s')\"\n \n #define Query_for_list_of_show_vars \\\n \"SELECT name FROM \"\\\n\n=# set AP[tab]\n=# set application_name _\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 08 Feb 2021 17:02:12 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "At Sun, 07 Feb 2021 13:55:00 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n>\n> This looks like you're trying to force case-insensitive behavior \n> whether that is appropriate or not. 
Does not sound like a good idea.\n\nThanks for your reply.\nI raise this issue because I thought all SQL commands should be case-insensitive.\nAnd the set/reset/show commands work well no matter whether the input configuration parameter is in upper or lower case.\nMy modification is not good enough, but I really think it's more convenient if we can support the tab-completion for upper character inputs.\n\n=# set APPLICATION_NAME to test;\nSET\n\n=# show APPLICATION_name;\n application_name\n------------------\n test\n(1 row)\n\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com> \nSent: Monday, February 8, 2021 5:02 PM\n\n>However set doesn't. If it is what is wanted, the following change on Query_for_list_of_set_vars works (only for the case of SET/RESET commands).\n\nThanks for your update. I applied your patch; it works well for SET/RESET commands.\nI added the same modification to the SHOW command. The new patch (V2) can support tab completion for upper character inputs in psql for SET/RESET/SHOW commands.\n\nRegards,\nTang", "msg_date": "Mon, 8 Feb 2021 12:12:35 +0000", "msg_from": "\"Tang, Haiying\" <tanghy.fnst@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Support tab completion for upper character inputs in psql" }, { "msg_contents": "At Sun, 07 Feb 2021 13:55:00 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n>\n> This looks like you're trying to force case-insensitive behavior \n> whether that is appropriate or not. 
Does not sound like a good idea.\n\nI'm still confused about the APPROPRIATE behavior of tab completion.\nIt seems ALTER table/tablespace <name> SET/RESET is already case-insensitive.\n\nFor example\n# alter tablespace dbspace set(e[tab]\n# alter tablespace dbspace set(effective_io_concurrency\n\n# alter tablespace dbspace set(E[tab]\n# alter tablespace dbspace set(EFFECTIVE_IO_CONCURRENCY\n\nThe above behavior is exactly the same as what the patch (attached in the following message) did for SET/RESET etc.\nhttps://www.postgresql.org/message-id/flat/a63cbd45e3884cf9b3961c2a6a95dcb7%40G08CNEXMBPEKD05.g08.fujitsu.local\n\nIf anyone can share some cases which show inappropriate scenarios of forcing case-insensitive inputs in psql,\nI'd be grateful for that.\n\nRegards,\nTang\n\n\n\n\n\n", "msg_date": "Tue, 9 Feb 2021 14:48:02 +0000", "msg_from": "\"Tang, Haiying\" <tanghy.fnst@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Support tab completion for upper character inputs in psql" }, { "msg_contents": "On 09.02.21 15:48, Tang, Haiying wrote:\n> I'm still confused about the APPROPRIATE behavior of tab completion.\n> It seems ALTER table/tablespace <name> SET/RESET is already case-insensitive.\n> \n> For example\n> # alter tablespace dbspace set(e[tab]\n> # alter tablespace dbspace set(effective_io_concurrency\n> \n> # alter tablespace dbspace set(E[tab]\n> # alter tablespace dbspace set(EFFECTIVE_IO_CONCURRENCY\n\nThis case completes with a hardcoded list, which is done \ncase-insensitively by default. The cases that complete with a query \nresult are not case insensitive right now. This affects things like\n\nUPDATE T<tab>\n\nas well. I think your first patch was basically right. But we need to \nunderstand that this affects all completions with query results, not \njust the one you wanted to fix. 
So you should analyze all the callers \nand explain why the proposed change is appropriate.\n\n\n", "msg_date": "Mon, 15 Mar 2021 21:20:22 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "On Tuesday, March 16, 2021 5:20 AM, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n>The cases that complete with a query \n>result are not case insensitive right now. This affects things like\n>\n>UPDATE T<tab>\n>\n>as well. I think your first patch was basically right. But we need to \n>understand that this affects all completions with query results, not \n>just the one you wanted to fix. So you should analyze all the callers \n>and explain why the proposed change is appropriate.\n\nThanks for your review and suggestion. Please find attached patch V3 which was based on the first patch[1].\nDifference from the first patch is:\n\nAdd tab completion support for all query results in psql.\ncomplete_from_query\n+complete_from_versioned_query\n+complete_from_schema_query\n+complete_from_versioned_schema_query\n\n[1] https://www.postgresql.org/message-id/a63cbd45e3884cf9b3961c2a6a95dcb7%40G08CNEXMBPEKD05.g08.fujitsu.local\n\nThe modification to support case insensitive matching in \"_complete_from_query\" is based on \"complete_from_const\" and \"complete_from_list\".\nPlease let me know if you find anything insufficient.\n\nRegards,\nTang", "msg_date": "Mon, 22 Mar 2021 12:41:41 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Support tab completion for upper character inputs in psql" }, { "msg_contents": "Hi Tang,\n\nThanks a lot for the patch.\n\nI did a quick test based on the latest patch V3 on latest master branch \n\"commit 4753ef37e0eda4ba0af614022d18fcbc5a946cc9\".\n\nCase 1: before patch\n\n   1 postgres=# set a\n   2 all                      
allow_system_table_mods \napplication_name         array_nulls\n   3 postgres=# set A\n   4\n   5 postgres=# create TABLE tbl (data text);\n   6 CREATE TABLE\n   7 postgres=# update tbl SET DATA =\n   8\n   9 postgres=# update T\n  10\n  11 postgres=#\n\nCase 2: after patched\n\n   1 postgres=# set a\n   2 all                      allow_system_table_mods \napplication_name         array_nulls\n   3 postgres=# set A\n   4 ALL                      ALLOW_SYSTEM_TABLE_MODS \nAPPLICATION_NAME         ARRAY_NULLS\n   5 postgres=# create TABLE tbl (data text);\n   6 CREATE TABLE\n   7\n   8 postgres=# update tbl SET DATA =\n   9\n  10 postgres=# update TBL SET\n  11\n  12 postgres=#\n\nSo, as you can see the difference is between line 8 and 10 in case 2. It \nlooks like the lowercase can auto complete more than the uppercase; \nsecondly, if you can add some test cases, it would be great.\n\nBest regards,\nDavid\n\nOn 2021-03-22 5:41 a.m., tanghy.fnst@fujitsu.com wrote:\n> On Tuesday, March 16, 2021 5:20 AM, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>\n>> The cases that complete with a query\n>> result are not case insensitive right now. This affects things like\n>>\n>> UPDATE T<tab>\n>>\n>> as well. I think your first patch was basically right. But we need to\n>> understand that this affects all completions with query results, not\n>> just the one you wanted to fix. So you should analyze all the callers\n>> and explain why the proposed change is appropriate.\n> Thanks for your review and suggestion. 
Please find attached patch V3 which was based on the first patch[1].\n> Difference from the first patch is:\n>\n> Add tab completion support for all query results in psql.\n> complete_from_query\n> +complete_from_versioned_query\n> +complete_from_schema_query\n> +complete_from_versioned_schema_query\n>\n> [1] https://www.postgresql.org/message-id/a63cbd45e3884cf9b3961c2a6a95dcb7%40G08CNEXMBPEKD05.g08.fujitsu.local\n>\n> The modification to support case insensitive matching in \"_complete_from_query\" is based on \"complete_from_const\" and \"complete_from_list\".\n> Please let me know if you find anything insufficient.\n>\n> Regards,\n> Tang\n>\n>\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca\n\n\n", "msg_date": "Tue, 30 Mar 2021 12:05:10 -0700", "msg_from": "David Zhang <david.zhang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "On Wednesday, March 31, 2021 4:05 AM, David Zhang <david.zhang@highgo.ca> wrote\r\n\r\n> 8 postgres=# update tbl SET DATA =\r\n> 9\r\n> 10 postgres=# update TBL SET\r\n> 11\r\n> 12 postgres=#\r\n>\r\n>So, as you can see the difference is between line 8 and 10 in case 2. It \r\n>looks like the lowercase can auto complete more than the uppercase; \r\n>secondly, if you can add some test cases, it would be great.\r\n\r\nThanks for your test. 
I fix the bug and add some tests for it.\r\nPlease find attached the latest patch V4.\r\n\r\nDifferences from v3 are:\r\n* fix an issue reported by Zhang [1] where a scenario was found which still wasn't able to realize tab completion in query.\r\n* add some tap tests.\r\n\r\n[1] https://www.postgresql.org/message-id/3140db2a-9808-c470-7e60-de39c431b3ab%40highgo.ca\r\n\r\nRegards,\r\nTang", "msg_date": "Thu, 1 Apr 2021 09:40:50 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Support tab completion for upper character inputs in psql" }, { "msg_contents": "On 01.04.21 11:40, tanghy.fnst@fujitsu.com wrote:\n> On Wednesday, March 31, 2021 4:05 AM, David Zhang <david.zhang@highgo.ca> wrote\n> \n>> 8 postgres=# update tbl SET DATA =\n>> 9\n>> 10 postgres=# update TBL SET\n>> 11\n>> 12 postgres=#\n>>\n>> So, as you can see the difference is between line 8 and 10 in case 2. It\n>> looks like the lowercase can auto complete more than the uppercase;\n>> secondly, if you can add some test cases, it would be great.\n> \n> Thanks for your test. I fix the bug and add some tests for it.\n> Please find attached the latest patch V4.\n> \n> Differences from v3 are:\n> * fix an issue reported by Zhang [1] where a scenario was found which still wasn't able to realize tab completion in query.\n> * add some tap tests.\n\nSeeing the tests you provided, it's pretty obvious that the current \nbehavior is insufficient. I think we could probably think of a few more \ntests, for example exercising the \"If case insensitive matching was \nrequested initially, adjust the case according to setting.\" case, or \nsomething with quoted identifiers. I'll push this to the next commit \nfest for now. 
I encourage you to keep working on it.\n\n\n", "msg_date": "Thu, 8 Apr 2021 09:13:44 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "On Thursday, April 8, 2021 4:14 PM, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote\r\n\r\n>Seeing the tests you provided, it's pretty obvious that the current \r\n>behavior is insufficient. I think we could probably think of a few more \r\n>tests, for example exercising the \"If case insensitive matching was \r\n>requested initially, adjust the case according to setting.\" case, or \r\n>something with quoted identifiers.\r\n\r\nThanks for your review and suggestions on my patch. \r\nI've added more tests in the latest patch V5, the added tests helped me find some bugs in my patch and I fixed them.\r\nNow the patch can support not only the SET/SHOW [PARAMETER] but also UPDATE [\"aTable\"|ATABLE], also UPDATE atable SET [\"aColumn\"|ACOLUMN].\r\n\r\nI really hope someone can have more tests suggestions on my patch or kindly do some tests on my patch and share me if any bugs happened.\r\n\r\nDifferences from V4 are:\r\n* fix some bugs related to quoted identifiers.\r\n* add some tap tests.\r\n\r\nRegards,\r\nTang", "msg_date": "Wed, 14 Apr 2021 13:34:11 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Support tab completion for upper character inputs in psql" }, { "msg_contents": "On Wed, Apr 14, 2021 at 11:34 PM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> On Thursday, April 8, 2021 4:14 PM, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote\n>\n> >Seeing the tests you provided, it's pretty obvious that the current\n> >behavior is insufficient. 
I think we could probably think of a few more\n> >tests, for example exercising the \"If case insensitive matching was\n> >requested initially, adjust the case according to setting.\" case, or\n> >something with quoted identifiers.\n>\n> Thanks for your review and suggestions on my patch.\n> I've added more tests in the latest patch V5, the added tests helped me find some bugs in my patch and I fixed them.\n> Now the patch can support not only the SET/SHOW [PARAMETER] but also UPDATE [\"aTable\"|ATABLE], also UPDATE atable SET [\"aColumn\"|ACOLUMN].\n>\n> I really hope someone can have more tests suggestions on my patch or kindly do some tests on my patch and share me if any bugs happened.\n>\n> Differences from V4 are:\n> * fix some bugs related to quoted identifiers.\n> * add some tap tests.\n\nI tried playing a bit with your psql patch V5 and I did not find any\nproblems - it seemed to work as advertised.\n\nBelow are a few code review comments.\n\n====\n\n1. Patch applies with whitespace warnings.\n\n[postgres@CentOS7-x64 oss_postgres_2PC]$ git apply\n../patches_misc/V5-0001-Support-tab-completion-with-a-query-result-for-upper.patch\n../patches_misc/V5-0001-Support-tab-completion-with-a-query-result-for-upper.patch:130:\ntrailing whitespace.\n}\nwarning: 1 line adds whitespace errors.\n\n====\n\n2. Unrelated \"code tidy\" fixes maybe should be another patch?\n\nI noticed there are a couple of \"code tidy\" fixes combined with this\npatch - e.g. passing fixes to some code comments and blank lines etc\n(see below). Although they are all good improvements, they maybe don't\nreally have anything to do with your feature/bugfix so I am not sure\nif they should be included here. 
Maybe post a separate patch for these\nones?\n\n@@ -1028,7 +1032,7 @@ static const VersionedQuery\nQuery_for_list_of_subscriptions[] = {\n };\n\n /*\n- * This is a list of all \"things\" in Pgsql, which can show up after CREATE or\n+ * This is a list of all \"things\" in pgsql, which can show up after CREATE or\n * DROP; and there is also a query to get a list of them.\n */\n\n@@ -4607,7 +4642,6 @@ complete_from_list(const char *text, int state)\n if (completion_case_sensitive)\n return pg_strdup(item);\n else\n-\n /*\n * If case insensitive matching was requested initially,\n * adjust the case according to setting.\n@@ -4660,7 +4694,6 @@ complete_from_const(const char *text, int state)\n if (completion_case_sensitive)\n return pg_strdup(completion_charp);\n else\n-\n /*\n * If case insensitive matching was requested initially, adjust\n * the case according to setting.\n\n====\n\n3. Unnecessary NULL check?\n\n@@ -4420,16 +4425,37 @@ _complete_from_query(const char *simple_query,\n PQclear(result);\n result = NULL;\n\n- /* Set up suitably-escaped copies of textual inputs */\n+ /* Set up suitably-escaped copies of textual inputs,\n+ * then change the textual inputs to lower case.\n+ */\n e_text = escape_string(text);\n+ if(e_text != NULL)\n+ {\n+ if(e_text[0] == '\"')\n+ completion_case_sensitive = true;\n+ else\n+ e_text = pg_string_tolower(e_text);\n+ }\n\nPerhaps that check \"if(e_text != NULL)\" is unnecessary. That function\nhardly looks capable of returning a NULL, and other callers are not\nchecking the return like this.\n\n====\n\n4. 
Memory not freed in multiple places?\n\n@@ -4420,16 +4425,37 @@ _complete_from_query(const char *simple_query,\n PQclear(result);\n result = NULL;\n\n- /* Set up suitably-escaped copies of textual inputs */\n+ /* Set up suitably-escaped copies of textual inputs,\n+ * then change the textual inputs to lower case.\n+ */\n e_text = escape_string(text);\n+ if(e_text != NULL)\n+ {\n+ if(e_text[0] == '\"')\n+ completion_case_sensitive = true;\n+ else\n+ e_text = pg_string_tolower(e_text);\n+ }\n\n if (completion_info_charp)\n+ {\n e_info_charp = escape_string(completion_info_charp);\n+ if(e_info_charp[0] == '\"')\n+ completion_case_sensitive = true;\n+ else\n+ e_info_charp = pg_string_tolower(e_info_charp);\n+ }\n else\n e_info_charp = NULL;\n\n if (completion_info_charp2)\n+ {\n e_info_charp2 = escape_string(completion_info_charp2);\n+ if(e_info_charp2[0] == '\"')\n+ completion_case_sensitive = true;\n+ else\n+ e_info_charp2 = pg_string_tolower(e_info_charp2);\n+ }\n else\n e_info_charp2 = NULL;\n\nThe function escape_string has a comment saying \"The returned value\nhas to be freed.\" but in the above code you are overwriting the\nescape_string result with the strdup'ed pg_string_tolower but without\nfree-ing the original e_text/e_info_charp/e_info_charp2.\n\n======\n\n5. strncmp replacement?\n\n@@ -4464,7 +4490,7 @@ _complete_from_query(const char *simple_query,\n */\n if (strcmp(schema_query->catname,\n \"pg_catalog.pg_class c\") == 0 &&\n- strncmp(text, \"pg_\", 3) != 0)\n+ strncmp(pg_string_tolower(text), \"pg_\", 3) != 0)\n {\n appendPQExpBufferStr(&query_buffer,\n \" AND c.relnamespace <> (SELECT oid FROM\"\n\nWhy not use strnicmp for case insensitive compare here instead of\nstrdup'ing another string (and not freeing it)?\n\nOr maybe use pg_strncasecmp.\n\n======\n\n6. 
byte_length == 0?\n\n@@ -4556,7 +4582,16 @@ _complete_from_query(const char *simple_query,\n while (list_index < PQntuples(result) &&\n (item = PQgetvalue(result, list_index++, 0)))\n if (pg_strncasecmp(text, item, byte_length) == 0)\n- return pg_strdup(item);\n+ {\n+ if (byte_length == 0 || completion_case_sensitive)\n+ return pg_strdup(item);\n+ else\n+ /*\n+ * If case insensitive matching was requested initially,\n+ * adjust the case according to setting.\n+ */\n+ return pg_strdup_keyword_case(item, text);\n+ }\n }\nThe byte_length was not being checked before, so why is the check needed now?\n\n======\n\n7. test typo \"ralation\"\n\n+# check query command completion for upper character ralation name\n+check_completion(\"update TAB1 SET \\t\", qr/update TAB1 SET \\af/,\n\"complete column name for TAB1\");\n\n======\n\n8. test typo \"case-insensitiveq\"\n\n+# check schema query(upper case) which is case-insensitiveq\n+check_completion(\"select oid from Pg_cla\\t\", qq/select oid from\nPg_cla\\b\\b\\b\\b\\bG_CLASS /, \"complete schema query with uppper case\nstring\");\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 21 Apr 2021 14:23:43 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "On Wednesday, April 21, 2021 1:24 PM, Peter Smith <smithpb2250@gmail.com> Wrote\r\n\r\n>I tried playing a bit with your psql patch V5 and I did not find any\r\n>problems - it seemed to work as advertised.\r\n>\r\n>Below are a few code review comments.\r\n\r\nThanks for you review. I've updated the patch to V6 according to your comments.\r\n\r\n>1. Patch applies with whitespace warnings.\r\nFixed.\r\n\r\n>2. Unrelated \"code tidy\" fixes maybe should be another patch?\r\nAgreed. Will post this modification on another thread.\r\n\r\n>3. Unnecessary NULL check?\r\nAgreed. NULL check removed.\r\n\r\n>4. 
Memory not freed in multiple places?\r\noops. Memory free added.\r\n\r\n>5. strncmp replacement?\r\nAgreed. Thanks for your advice. Since this modification has little relation with my patch here.\r\nI will merge this with comment(2) and push this on another patch.\r\n\r\n>6. byte_length == 0?\r\n>The byte_length was not being checked before, so why is the check needed now?\r\n\r\nWe need to make sure the empty input to be case sensitive as before(HEAD).\r\nFor example\r\n\tCREATE TABLE onetab1 (f1 int);\r\n\tupdate onetab1 SET [tab]\r\n\r\nWithout the check of \"byte_length == 0\", pg_strdup_keyword_case will make the column name \"f1\" to be upper case \"F1\".\r\nNamely, the output will be \" update onetab1 SET F1\" which is not so good.\r\n\r\nI added some tab tests for this empty input case, too. \r\n\r\n>7. test typo \"ralation\"\r\n>8. test typo \"case-insensitiveq\"\r\nThanks, typo fixed. \r\n\r\nAny further comment is very welcome.\r\n\r\nRegards,\r\nTang", "msg_date": "Thu, 22 Apr 2021 12:43:42 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Support tab completion for upper character inputs in psql" }, { "msg_contents": "At Thu, 22 Apr 2021 12:43:42 +0000, \"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com> wrote in \n> On Wednesday, April 21, 2021 1:24 PM, Peter Smith <smithpb2250@gmail.com> Wrot> >4. Memory not freed in multiple places?\n> oops. Memory free added.\n\nAll usages of pg_string_tolower don't need a copy.\nSo don't we change the function to in-place converter?\n\n> >6. 
byte_length == 0?\n> >The byte_length was not being checked before, so why is the check needed now?\n> \n> We need to make sure the empty input to be case sensitive as before(HEAD).\n> For example\n> \tCREATE TABLE onetab1 (f1 int);\n> \tupdate onetab1 SET [tab]\n> \n> Without the check of \"byte_length == 0\", pg_strdup_keyword_case will make the column name \"f1\" to be upper case \"F1\".\n> Namely, the output will be \" update onetab1 SET F1\" which is not so good.\n> \n> I added some tab tests for this empty input case, too. \n> \n> >7. test typo \"ralation\"\n> >8. test typo \"case-insensitiveq\"\n> Thanks, typo fixed. \n> \n> Any further comment is very welcome.\n\n \t\tif (completion_info_charp)\n+\t\t{\n \t\t\te_info_charp = escape_string(completion_info_charp);\n+\t\t\tif(e_info_charp[0] == '\"')\n+\t\t\t\tcompletion_case_sensitive = true;\n+\t\t\telse\n+\t\t\t{\n+\t\t\t\tle_str = pg_string_tolower(e_info_charp);\n\nIt seems right to lower completion_info_charp and ..2 but it is not\nright that change completion_case_sensitive here, which only affects\nthe returned candidates. This change prevents the following operation\nfrom getting the expected completion candidates.\n\n=# create table \"T\" (a int) partition by range(a);\n=# create table c1 partition of \"T\" for values from (0) to (10);\n=# alter table \"T\" drop partition C<tab>\n\nIs there any reason for doing that?\n\n\n\n+\t\t\t\tif (byte_length == 0 || completion_case_sensitive)\n\nIs the condition \"byte_length == 0 ||\" right?\n\nThis results in a maybe-unexpected behavior,\n\n=# \\set COM_KEYWORD_CASE upper\n=# create table t (a int) partition by range(a);\n=# create table d1 partition of t for values from (0) to (10);\n=# alter table t drop partition <tab>\n\nThis results in \n\n=# alter table t drop partition d1\n\nI think we are expecting D1 as the result.\n\nBy the way COMP_KEYWORD_CASE suggests that *keywords* are completed\nfollowing the setting. 
However, they are not keywords, but\nidentifiers. And some people (including me) might dislike that\nkeywords and identifiers follow the same setting. Specifically I\nsometimes want keywords to be upper-cased but identifiers (always) be\nlower-cased.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 23 Apr 2021 11:58:12 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> All usages of pg_string_tolower don't need a copy.\n> So don't we change the function to in-place converter?\n\nDoesn't seem like a good idea, because that locks us into an assumption\nthat the downcasing conversion doesn't change the string's physical\nlength. There are a lot of counterexamples to that :-(. I'm not sure\nthat we actually implement such cases correctly today, but let's not\nbuild APIs that prevent it from being fixed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Apr 2021 23:17:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "At Fri, 23 Apr 2021 11:58:12 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > Any further comment is very welcome.\n\nOh, I accidentally found a doubious behsbior.\n\n=# alter table public.<tab>\npublic.c1 public.d1 public.\"t\" public.t public.\"tt\" \n\nThe \"t\" and \"tt\" are needlessly lower-cased.\n\n# \\d\n List of relations\n Schema | Name | Type | Owner \n--------+--------------------+-------------------+----------\n public | T | partitioned table | horiguti\n public | TT | table | horiguti\n public | c1 | table | horiguti\n public | d1 | table | horiguti\n public | t | partitioned table | horiguti\n\n=# alter table public.\"<tab>\n=# alter table 
public.\"t -- candidates are \"t\" and \"tt\"?\n=# alter table public.\"tt<tab> -- nothing happens\n=# alter table public.\"TT<tab> -- also nothing happens\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 23 Apr 2021 12:25:36 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "At Thu, 22 Apr 2021 23:17:19 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > All usages of pg_string_tolower don't need a copy.\n> > So don't we change the function to in-place converter?\n> \n> Doesn't seem like a good idea, because that locks us into an assumption\n> that the downcasing conversion doesn't change the string's physical\n> length. There are a lot of counterexamples to that :-(. I'm not sure\n\nMmm. I didn't know of that.\n\n> that we actually implement such cases correctly today, but let's not\n> build APIs that prevent it from being fixed.\n\nAgreed. Thanks for the knowledge.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 23 Apr 2021 12:34:07 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Thu, 22 Apr 2021 23:17:19 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n>> Doesn't seem like a good idea, because that locks us into an assumption\n>> that the downcasing conversion doesn't change the string's physical\n>> length. There are a lot of counterexamples to that :-(. I'm not sure\n\n> Mmm. 
I didn't know of that.\n\nThe two examples I know of offhand are in German (eszett \"ß\" downcases to\n\"ss\") and Turkish (dotted \"Í\" downcases to \"i\", likewise dotless \"I\"\ndowncases to \"ı\"; one of each of those pairs is an ASCII letter, the\nother is not). Depending on which encoding is in use, these\ntransformations *could* be the same number of bytes, but they could\nequally well not be. There are probably other examples.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Apr 2021 00:17:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "FWIW...\r\n\r\nAt Fri, 23 Apr 2021 00:17:35 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \r\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\r\n> > At Thu, 22 Apr 2021 23:17:19 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \r\n> >> Doesn't seem like a good idea, because that locks us into an assumption\r\n> >> that the downcasing conversion doesn't change the string's physical\r\n> >> length. There are a lot of counterexamples to that :-(. I'm not sure\r\n> \r\n> > Mmm. I didn't know of that.\r\n> \r\n> The two examples I know of offhand are in German (eszett \"ß\" downcases to\r\n> \"ss\") and Turkish (dotted \"Í\" downcases to \"i\", likewise dotless \"I\"\r\n\r\nAccording to Wikipedia, \"ss\" is equivalent to \"ß\" and their upper case\r\nletters are \"SS\" and \"ẞ\" respectively. (I didn't even know of the\r\nexistence of \"ẞ\". AFAIK there's no word begins with eszett, but it\r\nseems that there's a case where \"ẞ\" appears in a word is spelled only\r\nwith capital letters.\r\n\r\n> downcases to \"ı\"; one of each of those pairs is an ASCII letter, the\r\n> other is not). Depending on which encoding is in use, these\r\n\r\nUpper dotless \"I\" and lower dotted \"i\" are in ASCII (or English\r\nalphabet?). 
That's interesting.\r\n\r\n> transformations *could* be the same number of bytes, but they could\r\n> equally well not be. There are probably other examples.\r\n\r\nYeah. Agreed.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Fri, 23 Apr 2021 14:44:43 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "On Fri, 2021-04-23 at 14:44 +0900, Kyotaro Horiguchi wrote:\n> > The two examples I know of offhand are in German (eszett \"ß\" downcases to\n> > \"ss\") and Turkish (dotted \"Í\" downcases to \"i\", likewise dotless \"I\"\n> \n> According to Wikipedia, \"ss\" is equivalent to \"ß\" and their upper case\n> letters are \"SS\" and \"ẞ\" respectively. (I didn't even know of the\n> existence of \"ẞ\". AFAIK there's no word begins with eszett, but it\n> seems that there's a case where \"ẞ\" appears in a word is spelled only\n> with capital letters.\n\nThis \"capital sharp s\" is a recent invention that has never got much\ntraction. I notice that on my Fedora 32 system with glibc 2.31 and de_DE.utf8,\n\nSELECT lower(E'\\u1E9E') = E'\\u00DF', upper(E'\\u00DF') = E'\\u1E9E';\n\n ?column? │ ?column? \n══════════╪══════════\n t │ f\n(1 row)\n\nwhich to me as a German speaker makes no sense.\n\nBut Tom's example was the wrong way around: \"ß\" is a lower case letter,\nand the traditional upper case translation is \"SS\".\n\nBut the Turkish example is correct:\n\n> > downcases to \"ı\"; one of each of those pairs is an ASCII letter, the\n> > other is not). Depending on which encoding is in use, these\n> \n> Upper dotless \"I\" and lower dotted \"i\" are in ASCII (or English\n> alphabet?). That's interesting.\n\nYes. In languages other than Turkish, \"i\" is the lower case version of \"I\",\nand both are ASCII. 
Only Turkish has an \"ı\" (U+0131) and an \"İ\" (U+0130).\nThat causes annoyance for Turks who create a table named KADIN and find\nthat PostgreSQL turns it into \"kadin\".\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Fri, 23 Apr 2021 10:33:39 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "Hi \n\nI've updated the patch to V7 based on the following comments. \n\nOn Friday, April 23, 2021 11:58 AM, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote\n>All usages of pg_string_tolower don't need a copy.\n>So don't we change the function to in-place converter?\n\nRefer to your later discussion with Tom. Keep the code as it is.\n\n>\t\tif (completion_info_charp)\n>+\t\t{\n> \t\t\te_info_charp = escape_string(completion_info_charp);\n>+\t\t\tif(e_info_charp[0] == '\"')\n>+\t\t\t\tcompletion_case_sensitive = true;\n>+\t\t\telse\n>+\t\t\t{\n>+\t\t\t\tle_str = pg_string_tolower(e_info_charp);\n>\n>It seems right to lower completion_info_charp and ..2 but it is not\n>right that change completion_case_sensitive here, which only affects\n>the returned candidates. \n\nAgreed, code \" completion_case_sensitive = true;\" removed.\n\n>By the way COMP_KEYWORD_CASE suggests that *keywords* are completed\n>following the setting. However, they are not keywords, but\n>identifiers. And some people (including me) might dislike that\n>keywords and identifiers follow the same setting. Specifically I\n>sometimes want keywords to be upper-cased but identifiers (always) be\n>lower-cased.\n\nChanged my design based on your suggestion. Now the upper character inputs for identifiers will always turn to lower case(regardless COMP_KEYWORD_CASE) which I think can be accepted by most of PG users. 
\n Eg: SET BYT<tab> / SET Byt<tab>\n output when apply V6 patch: SET BYTEA_OUTPUT\n output when apply V7 patch: SET bytea_output\n\nOn Friday, April 23, 2021 12:26 PM, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote\n>Oh, I accidentally found a doubious behsbior.\n>\n>=# alter table public.<tab>\n>public.c1 public.d1 public.\"t\" public.t public.\"tt\" \n>\n>The \"t\" and \"tt\" are needlessly lower-cased.\n\nGood catch. I didn’t think of schema stuff before. \nBug fixed. Add tap tests for this scenario.\n\nPlease let me know if you find more insufficient issue in the patch. Any further suggestion is very welcome.\n\nRegards,\nTang", "msg_date": "Mon, 26 Apr 2021 13:47:13 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Support tab completion for upper character inputs in psql" }, { "msg_contents": "Hi \n\nI've updated the patch to V8 since Tom, Kyotaro and Laurenz discussed the lower case issue of German/Turkish language at [1].\n\nDifferences from V7 are:\n* Add a function valid_input_text which checks the input text to see if it only contains alphabet letters, numbers etc.\n* Delete the flag setting of \"completion_case_sensitive=false\" which introduced in V1 patch and no use now.\n\nAs you can see, now the patch limited the lower case transform of the input to alphabet letters.\nBy doing that, language like German/Turkish will not affected by this patch.\n\nAny comment or suggestion on this patch is very welcome.\n\n[1]\nhttps://www.postgresql.org/message-id/1282887.1619151455%40sss.pgh.pa.us\nhttps://www.postgresql.org/message-id/20210423.144443.2058612313278551429.horikyota.ntt%40gmail.com\nhttps://www.postgresql.org/message-id/a75a6574c0e3d4773ba20a73d493c2c9983c0657.camel%40cybertec.at\n\nRegards,\nTang", "msg_date": "Wed, 23 Jun 2021 12:43:53 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Support tab completion 
for upper character inputs in psql" }, { "msg_contents": "On 23.06.21 14:43, tanghy.fnst@fujitsu.com wrote:\n> I've updated the patch to V8 since Tom, Kyotaro and Laurenz discussed the lower case issue of German/Turkish language at [1].\n> \n> Differences from V7 are:\n> * Add a function valid_input_text which checks the input text to see if it only contains alphabet letters, numbers etc.\n> * Delete the flag setting of \"completion_case_sensitive=false\" which introduced in V1 patch and no use now.\n> \n> As you can see, now the patch limited the lower case transform of the input to alphabet letters.\n> By doing that, language like German/Turkish will not affected by this patch.\n> \n> Any comment or suggestion on this patch is very welcome.\n\nThe coding of valid_input_text() seems a bit bulky. I think you can do \nthe same thing using strspn() without a loop.\n\nThe name is also not great. It's not like other strings are not \"valid\".\n\nThere is also no explanation why that specific set of characters is \nallowed and not others. Does it have something to do with identifier \nsyntax? This needs to be explained.\n\nSeeing that valid_input_text() is always called together with \npg_string_tolower(), I think those could be combined into one function, \nlike pg_string_tolower_if_ascii() is whatever. That would save a lot of \nrepetition.\n\nThere are a couple of queries where the result is *not* \ncase-insensitive, namely\n\nQuery_for_list_of_enum_values\nQuery_for_list_of_available_extension_versions\n\n(and their variants). These are cases where the query result is not \nused as an identifier but as a (single-quoted) string. 
So that needs to \nbe handled somehow, perhaps by adding a COMPLETE_WITH_QUERY_CS() similar \nto COMPLETE_WITH_CS().\n\n(A test case for the enum case should be doable easily.)\n\n\n", "msg_date": "Tue, 7 Sep 2021 10:25:20 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "On Tuesday, September 7, 2021 5:25 PM, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>The coding of valid_input_text() seems a bit bulky. I think you can do \n>the same thing using strspn() without a loop.\n\nThanks, modified in V9 patch.\n\n>The name is also not great. It's not like other strings are not \"valid\".\n\nModified.\nvalid_input_text() renamed to check_input_text()\n\n>There is also no explanation why that specific set of characters is \n>allowed and not others. Does it have something to do with identifier \n>syntax? This needs to be explained.\n\nAdded some comments for pg_string_tolower_if_ascii().\nFor language like German/Turkish, it's not a good idea to lower the input text \nbecause the upper case words may not retain the same meaning.(Pointed at [1~3])\n\n>Seeing that valid_input_text() is always called together with \n>pg_string_tolower(), I think those could be combined into one function, \n>like pg_string_tolower_if_ascii() is whatever. That would save a lot of \n>repetition.\n\nModified.\n\n>There are a couple of queries where the result is *not* \n>case-insensitive, namely\n>\n>Query_for_list_of_enum_values\n>Query_for_list_of_available_extension_versions\n>\n>(and their variants). These are cases where the query result is not \n>used as an identifier but as a (single-quoted) string. 
So that needs to \n>be handled somehow, perhaps by adding a COMPLETE_WITH_QUERY_CS() similar \n>to COMPLETE_WITH_CS().\n\nHmm, I think 'a (single-quoted) string' identifier behaves the same way with or without my patch.\nCould your please give me an example on that?(to help me figure out why we need something like COMPLETE_WITH_QUERY_CS())\n\n>(A test case for the enum case should be doable easily.)\n\nTest added.\n\nBTW, I found tap completion for enum value is not perfect on HEAD.\nMaybe I will fix this problem in another thread.\n\nexample:\n=# create type pp_colors as enum ('green', 'blue', 'black');\n=# ALTER TYPE pp_colors RENAME VALUE 'b[tab]\n=# alter type pp_colors rename value 'b' <- blue is not auto completed as expected\n\n[1] https://www.postgresql.org/message-id/1282887.1619151455%40sss.pgh.pa.us\n[2] https://www.postgresql.org/message-id/20210423.144443.2058612313278551429.horikyota.ntt%40gmail.com\n[3] https://www.postgresql.org/message-id/a75a6574c0e3d4773ba20a73d493c2c9983c0657.camel%40cybertec.at\n\nRegards,\nTang", "msg_date": "Fri, 10 Sep 2021 13:50:31 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Support tab completion for upper character inputs in psql" }, { "msg_contents": "On 10.09.21 15:50, tanghy.fnst@fujitsu.com wrote:\n>> (A test case for the enum case should be doable easily.)\n> Test added.\n\nThe enum test is failing on *some* platforms:\n\nt/010_tab_completion.pl .. 26/?\n# Failed test 'complete enum values'\n# at t/010_tab_completion.pl line 211.\n# Actual output was \"ALTER TYPE mytype1 RENAME VALUE '\\a\\r\\n'BLUE' \n'bLACK' 'green' \\r\\npostgres=# ALTER TYPE mytype1 RENAME VALUE '\"\n# Did not match \"(?^:'bLACK' + 'BLUE' + 'green')\"\n\nSo the ordering of the suggested completions is different. I don't know \noffhand how that ordering is determined. Perhaps it's dependent on \nlocale, readline version, or operating system. 
In any case, we need to \nfigure this out to make this test stable.\n\n\n", "msg_date": "Thu, 6 Jan 2022 08:46:48 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> So the ordering of the suggested completions is different. I don't know \n> offhand how that ordering is determined. Perhaps it's dependent on \n> locale, readline version, or operating system. In any case, we need to \n> figure this out to make this test stable.\n\nI don't think we want to get into the business of trying to make that\nconsistent across different readline/libedit versions. How about\nadjusting the test case so that only one enum value is to be printed?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 06 Jan 2022 09:56:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "On Thursday, January 6, 2022 11:57 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > So the ordering of the suggested completions is different. I don't know\n> > offhand how that ordering is determined. Perhaps it's dependent on\n> > locale, readline version, or operating system. In any case, we need to\n> > figure this out to make this test stable.\n>\n> I don't think we want to get into the business of trying to make that\n> consistent across different readline/libedit versions. How about\n> adjusting the test case so that only one enum value is to be printed?\n> \n\nThanks for your suggestion. Agreed. 
\nFixed the test case to show only one enum value.\n\nRegards,\nTang", "msg_date": "Fri, 7 Jan 2022 02:12:23 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Support tab completion for upper character inputs in psql" }, { "msg_contents": "\nOn Fri, 07 Jan 2022 at 10:12, tanghy.fnst@fujitsu.com <tanghy.fnst@fujitsu.com> wrote:\n> On Thursday, January 6, 2022 11:57 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> > So the ordering of the suggested completions is different. I don't know\n>> > offhand how that ordering is determined. Perhaps it's dependent on\n>> > locale, readline version, or operating system. In any case, we need to\n>> > figure this out to make this test stable.\n>>\n>> I don't think we want to get into the business of trying to make that\n>> consistent across different readline/libedit versions. How about\n>> adjusting the test case so that only one enum value is to be printed?\n>>\n>\n> Thanks for your suggestion. 
Agreed.\n> Fixed the test case to show only one enum value.\n>\n\n+/*\n+ * pg_string_tolower - Fold a string to lower case if the string is not quoted\n+ * and only contains ASCII characters.\n+ * For German/Turkish etc text, no change will be made.\n+ *\n+ * The returned value has to be freed.\n+ */\n+static char *\n+pg_string_tolower_if_ascii(const char *text)\n+{\n\ns/pg_string_tolower/pg_string_tolower_if_ascii/ for comments.\n\n--\nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Fri, 07 Jan 2022 12:08:17 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "On Friday, January 7, 2022 1:08 PM, Japin Li <japinli@hotmail.com> wrote:\n> +/*\n> + * pg_string_tolower - Fold a string to lower case if the string is not quoted\n> + * and only contains ASCII characters.\n> + * For German/Turkish etc text, no change will be made.\n> + *\n> + * The returned value has to be freed.\n> + */\n> +static char *\n> +pg_string_tolower_if_ascii(const char *text)\n> +{\n> \n> s/pg_string_tolower/pg_string_tolower_if_ascii/ for comments.\n> \n\nThanks for your review.\nComment fixed in the attached V11 patch.\n\nRegards,\nTang", "msg_date": "Fri, 7 Jan 2022 05:17:21 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Support tab completion for upper character inputs in psql" }, { "msg_contents": "On 07.01.22 06:17, tanghy.fnst@fujitsu.com wrote:\n> On Friday, January 7, 2022 1:08 PM, Japin Li <japinli@hotmail.com> wrote:\n>> +/*\n>> + * pg_string_tolower - Fold a string to lower case if the string is not quoted\n>> + * and only contains ASCII characters.\n>> + * For German/Turkish etc text, no change will be made.\n>> + *\n>> + * The returned value has to be freed.\n>> + */\n>> +static char *\n>> +pg_string_tolower_if_ascii(const char *text)\n>> +{\n>>\n>> 
s/pg_string_tolower/pg_string_tolower_if_ascii/ for comments.\n>>\n> \n> Thanks for your review.\n> Comment fixed in the attached V11 patch.\n\nAs I just posted over at [0], the tab completion of enum values appears \nto be broken at the moment, so I can't really analyze what impact your \npatch would have on it. (But it makes me suspicious about the test case \nin your patch.) I suspect it would treat enum labels as \ncase-insensitive, which would be wrong. But we need to fix that issue \nfirst before we can proceed here.\n\nThe rest of the patch seems ok in principle, since AFAICT enums are the \nonly query result in tab-complete.c that are not identifiers and thus \nsubject to case issues.\n\nI would perhaps move the pg_string_tolower_if_ascii() calls to before \nescape_string() in each case. It won't make a difference to the result, \nbut it seems conceptually better.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/8ca82d89-ec3d-8b28-8291-500efaf23b25@enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Jan 2022 12:29:54 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> The rest of the patch seems ok in principle, since AFAICT enums are the \n> only query result in tab-complete.c that are not identifiers and thus \n> subject to case issues.\n\nI spent some time looking at this patch. I'm not very happy with it,\nfor two reasons:\n\n1. The downcasing logic in the patch bears very little resemblance\nto the backend's actual downcasing logic, which can be found in\nsrc/backend/parser/scansup.c's downcase_identifier(). Notably,\nthe patch's restriction to only convert all-ASCII strings seems\nindefensible, because that's not how things really work. 
I fear\nwe can't always exactly duplicate the backend's behavior, because\nit's dependent on the server's locale and encoding; but I think\nwe should at least get it right in the common case where psql is\nusing the same locale and encoding as the server.\n\n2. I don't think there's been much thought about the larger picture\nof what is to be accomplished. Right now, we successfully\ntab-complete inputs that are prefixes of the canonical spelling (per\nquote_identifier) of the object's name, and don't try at all for\nnon-canonical spellings. I'm on board with trying to allow some of\nthe latter but I'm not sure that this patch represents much forward\nprogress. To be definite about it, suppose we have a DB containing\njust two tables whose names start with \"m\", say mytab and mixedTab.\nThen:\n\n(a) m<TAB> immediately completes mytab, ignoring mixedTab\n\n(b) \"m<TAB> immediately completes \"mixedTab\", ignoring mytab\n\n(c) \"my<TAB> fails to find anything\n\n(d) mi<TAB> fails to find anything\n\n(e) M<TAB> fails to find anything\n\nThis patch proposes to improve case (e), but to my taste cases (a)\nthrough (c) are much bigger problems. It'd be nice if (d) worked too\n--- that'd require injecting a double-quote where the user had not\ntyped one, but we already do the equivalent thing with single-quotes\nfor file names, so why not? (Although after fighting with readline\nyesterday to try to get it to handle single-quoted enum labels sanely,\nI'm not 100% sure if (d) is possible.)\n\nAlso, even for case (e), what we have with this patch is that it\nimmediately completes mytab, ignoring mixedTab. 
Is that what we want?\nAnother example is that miX<TAB> fails to find anything, which seems\nlike a POLA violation given that mY<TAB> completes to mytab.\n\nI'm not certain how many of these alternatives can be supported\nwithout introducing ambiguity that wasn't there before (which'd\nmanifest as failing to complete in cases where the existing code\nchooses an alternative just fine). But I really don't like the\nexisting behavior for (b) and (c) --- I should be able to spell\na name with double quotes if I want, without losing completion\nsupport.\n\nBTW, another thing that maybe we should think about is how this\ninteracts with the pattern matching capability in \\d and friends.\nIf people can tab-complete non-canonical spellings, they might\nexpect the same spellings to work in \\d. I don't say that this\npatch has to fix that, but we might want to look and be sure we're\nnot painting ourselves into a corner (especially since I see\nthat we already perform tab-completion in that context).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 15 Jan 2022 13:51:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "Hi,\n\nOn Sat, Jan 15, 2022 at 01:51:26PM -0500, Tom Lane wrote:\n> \n> I spent some time looking at this patch. 
I'm not very happy with it,\n> for two reasons:\n> [...]\n\nOn top of that the patch doesn't apply anymore:\n\nhttp://cfbot.cputube.org/patch_36_2979.log\n=== Applying patches on top of PostgreSQL commit ID 5987feb70b5bbb1fc4e64d433f490df08d91dd45 ===\n=== applying patch ./v11-0001-Support-tab-completion-with-a-query-result-for-u.patch\npatching file src/bin/psql/t/010_tab_completion.pl\nHunk #1 FAILED at 41.\nHunk #2 succeeded at 150 (offset 1 line).\n1 out of 2 hunks FAILED -- saving rejects to file src/bin/psql/t/010_tab_completion.pl.rej\n\nI'm switching the CF entry to Waiting on Author.\n\n\n", "msg_date": "Wed, 19 Jan 2022 16:59:20 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "On Sunday, January 16, 2022 3:51 AM, Tom Lane <tgl@sss.pgh.pa.us> said:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > The rest of the patch seems ok in principle, since AFAICT enums are the\n> > only query result in tab-complete.c that are not identifiers and thus\n> > subject to case issues.\n> \n> I spent some time looking at this patch. I'm not very happy with it,\n> for two reasons:\n> \n> 1. The downcasing logic in the patch bears very little resemblance\n> to the backend's actual downcasing logic, which can be found in\n> src/backend/parser/scansup.c's downcase_identifier(). Notably,\n> the patch's restriction to only convert all-ASCII strings seems\n> indefensible, because that's not how things really work. 
I fear\n> we can't always exactly duplicate the backend's behavior, because\n> it's dependent on the server's locale and encoding; but I think\n> we should at least get it right in the common case where psql is\n> using the same locale and encoding as the server.\n\nThanks for your suggestion. I removed the ASCII string check function\nand added a single-byte encoding check just like downcase_identifier().\nI also added a PGCLIENTENCODING setting in the test script to make \nthe test cases pass.\nNow the patch supports tab completion of non-quoted upper-case characters\nwhen the client encoding is a single-byte encoding.\n\n> 2. I don't think there's been much thought about the larger picture\n> of what is to be accomplished. Right now, we successfully\n> tab-complete inputs that are prefixes of the canonical spelling (per\n> quote_identifier) of the object's name, and don't try at all for\n> non-canonical spellings. I'm on board with trying to allow some of\n> the latter but I'm not sure that this patch represents much forward\n> progress. To be definite about it, suppose we have a DB containing\n> just two tables whose names start with \"m\", say mytab and mixedTab.\n> Then:\n> \n> (a) m<TAB> immediately completes mytab, ignoring mixedTab\n> \n> (b) \"m<TAB> immediately completes \"mixedTab\", ignoring mytab\n> \n> (c) \"my<TAB> fails to find anything\n> \n> (d) mi<TAB> fails to find anything\n> \n> (e) M<TAB> fails to find anything\n> \n> This patch proposes to improve case (e), but to my taste cases (a)\n> through (c) are much bigger problems. It'd be nice if (d) worked too\n> --- that'd require injecting a double-quote where the user had not\n> typed one, but we already do the equivalent thing with single-quotes\n> for file names, so why not? 
(Although after fighting with readline\n> yesterday to try to get it to handle single-quoted enum labels sanely,\n> I'm not 100% sure if (d) is possible.)\n> \n> Also, even for case (e), what we have with this patch is that it\n> immediately completes mytab, ignoring mixedTab. Is that what we want?\n> Another example is that miX<TAB> fails to find anything, which seems\n> like a POLA violation given that mY<TAB> completes to mytab.\n>\n> I'm not certain how many of these alternatives can be supported\n> without introducing ambiguity that wasn't there before (which'd\n> manifest as failing to complete in cases where the existing code\n> chooses an alternative just fine). But I really don't like the\n> existing behavior for (b) and (c) --- I should be able to spell\n> a name with double quotes if I want, without losing completion\n> support.\n\nYou are right, it would be more convenient that way.\nI hadn't thought about it before. Currently, the patch assumes:\nif the user needs to type a table name containing upper-case characters, \nthey should input the double quotes by themselves. If the double \nquote is input by the user, only table names with upper-case characters can be searched.\n\nI may try to implement it as you expect, but it seems not so easy \n(as you said, without introducing ambiguity that wasn't there before).\nI'd appreciate it if someone could give me a hint/hand on this.\n\n> BTW, another thing that maybe we should think about is how this\n> interacts with the pattern matching capability in \\d and friends.\n> If people can tab-complete non-canonical spellings, they might\n> expect the same spellings to work in \\d. I don't say that this\n> patch has to fix that, but we might want to look and be sure we're\n> not painting ourselves into a corner (especially since I see\n> that we already perform tab-completion in that context).\n\nYes. 
Agreed, if we solve the previous problem, \nmeta-command tab completion should also be considered.\n\nRegards,\nTang", "msg_date": "Thu, 20 Jan 2022 07:37:18 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Support tab completion for upper character inputs in psql" }, { "msg_contents": "On 20.01.22 08:37, tanghy.fnst@fujitsu.com wrote:\n>> 1. The downcasing logic in the patch bears very little resemblance\n>> to the backend's actual downcasing logic, which can be found in\n>> src/backend/parser/scansup.c's downcase_identifier(). Notably,\n>> the patch's restriction to only convert all-ASCII strings seems\n>> indefensible, because that's not how things really work. I fear\n>> we can't always exactly duplicate the backend's behavior, because\n>> it's dependent on the server's locale and encoding; but I think\n>> we should at least get it right in the common case where psql is\n>> using the same locale and encoding as the server.\n> Thanks for your suggestion, I removed ASCII strings check function\n> and added single byte encoding check just like downcase_identifier.\n> Also added PGCLIENTENCODING setting in the test script to make\n> test cases pass.\n> Now the patch supports tab-completion with none-quoted upper characters\n> available when client encoding is in single byte.\n\nThe way your patch works now is that the case-insensitive behavior you \nare implementing only works if the client encoding is a single-byte \nencoding. This isn't what downcase_identifier() does; \ndowncase_identifier() always works for ASCII characters. As it is, this \npatch is nearly useless, since very few people use single-byte client \nencodings anymore. 
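To make the distinction concrete, the guarantee downcase_identifier() gives for ASCII — fold only the 26 ASCII letters and leave every other byte (including the bytes of a multibyte character) untouched, regardless of encoding — can be sketched as follows. This is a simplified illustration with an invented helper name, not the actual scansup.c or patch code:

```c
#include <stdlib.h>
#include <string.h>

/*
 * Hypothetical helper (not the real patch code): return a copy of the
 * input with only the 26 ASCII letters folded to lower case.  Bytes
 * outside 'A'..'Z', including any bytes of a multibyte character, are
 * copied unchanged, so the result does not depend on the client encoding.
 */
static char *
downcase_ascii_copy(const char *src)
{
	size_t		len = strlen(src);
	char	   *dst = malloc(len + 1);	/* error handling omitted */
	size_t		i;

	for (i = 0; i <= len; i++)	/* "<=" copies the trailing NUL too */
	{
		unsigned char ch = (unsigned char) src[i];

		if (ch >= 'A' && ch <= 'Z')
			ch += 'a' - 'A';
		dst[i] = (char) ch;
	}
	return dst;
}
```

Because only bytes in the 'A'..'Z' range are touched, the result is the same under any client encoding, which is why a restriction to single-byte encodings (or to all-ASCII strings) is unnecessary for this part of the behavior.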
Also, I think it would be highly confusing if the \ntab completion behavior depended on the client encoding in a significant \nway.\n\nAlso, as I had previously suspected, your patch treats the completion of \nenum labels in a case-insensitive way (since it all goes through \n_complete_from_query()), but enum labels are not case insensitive. You \ncan observe this behavior using this test case:\n\n+check_completion(\"ALTER TYPE enum1 RENAME VALUE 'F\\t\\t\", qr|foo|, \"FIXME\");\n+\n+clear_line();\n\nYou should devise a principled way to communicate to \n_complete_from_query() whether it should do case-sensitive or \n-insensitive completion. We already have COMPLETE_WITH() and \nCOMPLETE_WITH_CS() etc. to do this in other cases, so it should be \nstraightforward to adapt a similar system.\n\n\n", "msg_date": "Mon, 24 Jan 2022 10:35:36 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "On Monday, January 24, 2022 6:36 PM, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\r\n> The way your patch works now is that the case-insensitive behavior you\r\n> are implementing only works if the client encoding is a single-byte\r\n> encoding. This isn't what downcase_identifier() does;\r\n> downcase_identifier() always works for ASCII characters. As it is, this\r\n> patch is nearly useless, since very few people use single-byte client\r\n> encodings anymore. Also, I think it would be highly confusing if the\r\n> tab completion behavior depended on the client encoding in a significant\r\n> way.\r\n\r\nThanks for your review. I misunderstood the logic of downcase_identifier().\r\nModified the code to support ASCII characters input. 
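The COMPLETE_WITH()/COMPLETE_WITH_CS() split suggested above could be mirrored for query-based completion roughly as below. The names and signatures here are invented for illustration; they are not the actual tab-complete.c API:

```c
#include <string.h>
#include <strings.h>

/*
 * Hypothetical sketch: thread a case-sensitivity flag through prefix
 * matching, the same way COMPLETE_WITH() vs. COMPLETE_WITH_CS() already
 * distinguishes the two behaviors for fixed word lists.
 */
static int
prefix_matches(const char *candidate, const char *input, int casesensitive)
{
	size_t		inlen = strlen(input);

	if (casesensitive)
		return strncmp(candidate, input, inlen) == 0;
	return strncasecmp(candidate, input, inlen) == 0;
}

/* query-based analogues of COMPLETE_WITH() / COMPLETE_WITH_CS() */
#define MATCHES_FROM_QUERY(cand, input)		prefix_matches((cand), (input), 0)
#define MATCHES_FROM_QUERY_CS(cand, input)	prefix_matches((cand), (input), 1)
```

With such a split, enum labels (which are case sensitive) would go through the _CS variant, while keywords and to-be-downcased identifiers could keep the case-insensitive path.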
\r\n\r\n> Also, as I had previously suspected, your patch treats the completion of\r\n> enum labels in a case-insensitive way (since it all goes through\r\n> _complete_from_query()), but enum labels are not case insensitive. You\r\n> can observe this behavior using this test case:\r\n> \r\n> +check_completion(\"ALTER TYPE enum1 RENAME VALUE 'F\\t\\t\", qr|foo|, \"FIXME\");\r\n> +\r\n> +clear_line();\r\n\r\nYour suspicion is correct. I wasn't aware that enum labels are case sensitive.\r\nI've added this test to the tap tests. \r\n\r\n> You should devise a principled way to communicate to\r\n> _complete_from_query() whether it should do case-sensitive or\r\n> -insensitive completion. We already have COMPLETE_WITH() and\r\n> COMPLETE_WITH_CS() etc. to do this in other cases, so it should be\r\n> straightforward to adapt a similar system.\r\n\r\nI tried to add a flag(casesensitive) in the _complete_from_query().\r\nNow the attached patch passed all the added tap tests.\r\n\r\nRegards,\r\nTang", "msg_date": "Tue, 25 Jan 2022 05:22:32 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Support tab completion for upper character inputs in psql" }, { "msg_contents": "Hi,\n\nOn Tue, Jan 25, 2022 at 05:22:32AM +0000, tanghy.fnst@fujitsu.com wrote:\n> \n> I tried to add a flag(casesensitive) in the _complete_from_query().\n> Now the attached patch passed all the added tap tests.\n\nThanks for updating the patch. When you do so, please check and update the\ncommitfest entry accordingly to make sure that people know it's ready for\nreview. I'm switching the entry to Needs Review.\n\n\n", "msg_date": "Tue, 25 Jan 2022 17:43:50 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "I spent some time contemplating my navel about the concerns I raised\nupthread about double-quoted identifiers. 
I concluded that the reason\nthings don't work well in that area is that we're trying to get all the\nwork done by applying quote_ident() on the backend side and then\nignoring quoting considerations in tab-complete itself. That sort of\nworks, but not terribly well. The currently proposed patch is sticking\na toe into the water of dealing with quoting/downcasing in tab-complete,\nbut we need to go a lot further. I propose that we ought to drop the\nuse of quote_ident() in the tab completion queries altogether, instead\nhaving the backend return names as-is, and doing all the dequoting and\nrequoting work in tab-complete.\n\nAttached is a very-much-WIP patch along these lines. I make no\npretense that it's complete; no doubt some of the individual\nqueries are broken or don't return quite the results we want.\nBut it seems to act the way I think it should for relation names.\n\nOne thing I'm particularly unsure what to do with is the queries\nfor type names, which want to match against the output of\nformat_type, which'll already have applied quote_ident. We can\nprobably hack something up there, but I ran out of time to mess\nwith that for today.\n\nAnyway, I wanted to post this just to see what people think of\ngoing in this direction.\n\n\t\t\tregards, tom lane\n\nPS: I omitted the proposed regression test changes here.\nMany of them are not at all portable --- different versions\nof readline/libedit will produce different control character\nsequences for backspacing, for example. 
I got a lot of\nfailures when I tried to use those tests with this patch;\nI've not run down which ones are test portability problems,\nwhich are due to intentional behavior changes in this patch,\nand which are due to breakage I've not fixed yet.", "msg_date": "Tue, 25 Jan 2022 18:11:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "On Tuesday, January 25, 2022 6:44 PM, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> Thanks for updating the patch. When you do so, please check and update the\n> commitfest entry accordingly to make sure that people knows it's ready for\n> review. I'm switching the entry to Needs Review.\n> \n\nThanks for your reminder. I'll watch out the status change as you suggested.\n\nRegards,\nTang\n\n\n", "msg_date": "Wed, 26 Jan 2022 01:46:01 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Support tab completion for upper character inputs in psql" }, { "msg_contents": "I wrote:\n> I spent some time contemplating my navel about the concerns I raised\n> upthread about double-quoted identifiers. I concluded that the reason\n> things don't work well in that area is that we're trying to get all the\n> work done by applying quote_ident() on the backend side and then\n> ignoring quoting considerations in tab-complete itself. That sort of\n> works, but not terribly well. The currently proposed patch is sticking\n> a toe into the water of dealing with quoting/downcasing in tab-complete,\n> but we need to go a lot further. 
I propose that we ought to drop the\n> use of quote_ident() in the tab completion queries altogether, instead\n> having the backend return names as-is, and doing all the dequoting and\n> requoting work in tab-complete.\n\nHere's a fleshed-out patch series for this idea.\n\n0001 below is a more evolved version of my previous WIP patch.\nThe main thing that's changed is that I found that it no longer\nworks to handle extra keywords (such as adding COLUMN to the list\nof attribute names after ALTER TABLE tab RENAME) via tab-complete's\ntraditional method of sticking on \"UNION SELECT 'foo'\". That's\nbecause such a result row looks like something that needs to be\ndouble-quoted, which of course makes the completion incorrect.\nSo I've created a side-channel whereby _complete_from_query() can\nreturn some verbatim keywords alongside the actual query results.\n(I think this is a good thing anyway, because the UNION method is\nincredibly wasteful of server cycles. Yeah, I know that these\nqueries only need to run at human speed, but if your server is\nheavily loaded you might still not appreciate the extra cycles.)\n\nHaving done that, I solved the format_type problem by just dropping\nthe use of format_type altogether, and returning only the normal\npg_type.typname entries. The only cases where format_type did\nanything useful for us are the small number of built-in types where\nit substitutes a SQL-mandated name, and we can treat those like\nkeywords (as indeed they are). The list of such types changes\nseldom enough that I don't think it's a huge maintenance burden to\nhave one more place that knows about them.\n\nBTW, I was amused to notice that many of the format_type special cases\ndon't actually work as completions, and never have. 
For example,\nif we return both \"timestamp with time zone\" and \"timestamp without\ntime zone\", readline can complete as far as \"timestamp with\", but\nfurther completion fails because \"timestamp\" is now seen as a previous\nword that's not part of what's to be completed. So I've dropped those\ncases from the keyword list. Maybe somebody will get interested in\nfiguring a way to make that work, but IMO the cost/benefit ratio for\nsuch effort would be pretty bad.\n\nIncidentally, I found that some of the completion queries were\nintentionally ignoring the given prefix text, with stuff like\n\n/* the silly-looking length condition is just to eat up the current word */\n\" WHERE ... (%d = pg_catalog.length('%s'))\"\n\nI'm not sure why we ever thought that was a good idea, but it\ndefinitely doesn't work anymore, since I removed the filtering\nthat _complete_from_query() used to do on the query results.\nIt's now incumbent on the queries to only return valid matches,\nso I replaced all instances of this pattern with the regular\nsubstring() checks, or even added a substring() check in a\ncouple of queries where there was nothing at all.\n\n0001 takes care of quoting and case-folding issues for the actual\nsubject name of a completion operation, but there's more to do.\nA lot of queries have to reference a previously-entered name\n(for example, ALTER TABLE tab1 DROP COLUMN <TAB> has to find the\ncolumn names of table tab1), and we had variously shoddy code\nfor dealing with those names. Only a few queries even attempted\nto handle schema-qualified names, and none at all of them would\ndowncase unquoted names. So 0002 tries to fix that up, using\nthe same code to parse/downcase/de-quote the name as we would\nuse if it were the subject text.\n\nIt didn't take long to find that the existing methods for this\nwere incredibly tedious, requiring near-duplicate queries\ndepending on whether the previous name was schema-qualified or\nnot. 
So I've extended the SchemaQuery mechanism to support\nadding qualifications based on an additional name, and now\nwe use that wherever we need a possibly-schema-qualified\nprevious name.\n\nA couple of the existing queries of this sort used WHERE oid\nIN (sub-SELECT), which I didn't see a great way to jam into\nthe SchemaQuery mechanism. What I've done here is to convert\nthose semijoins into plain joins, which might yield multiple\ninstances of wanted names, and then stick DISTINCT onto the\nqueries. It's not very pretty, but it works fine.\n\nIn 0001 and 0002, I left the core of _complete_from_query()\nun-reindented, in hopes of making the actual code changes\nmore readily reviewable. 0003 is just an application of pgindent\nto fix that up and make the finished code legible again.\n\nFinally, 0004 adds some test cases. I'm not too confident about\nhow portable these will be, but I don't think they are making any\nassumptions the existing tests didn't make already. They do pass\nfor me on Linux (readline 7.0) and macOS (Apple's libedit).\n\nThis is sufficiently invasive to tab-complete.c that I'd like to\nget it pushed fairly soon, before that code changes under me.\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 27 Jan 2022 15:23:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "On Friday, January 28, 2022 5:24 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Here's a fleshed-out patch series for this idea.\n\nThanks for your patch. 
\nI did some tests on it and here are some cases I feel we need to confirm \nwhether they are suitable.\n\n1) postgres=# create table atest(id int, \"iD\" int, \"ID\" int);\n2) CREATE TABLE\n3) postgres=# alter table atest rename i[TAB]\n4) id \"iD\"\n5) postgres=# alter table atest rename I[TAB]\n6) id \"iD\"\n\nThe tab completion for 5) ignored \"ID\", is that correct?\n\n7) postgres=# create table \"aTest\"(\"myCol\" int, mycol int);\n8) CREATE TABLE\n9) postgres=# alter table a[TAB]\n10) ALL IN TABLESPACE atest \"aTest\"\n11) postgres=# alter table aT[TAB] -> atest\n\nI think what we are trying to do is to ease the burden of typing double quotes for the user.\nBut in line 11), the tab completion for \"alter table aT[TAB]\" is atest,\nwhich makes the tab completion output of \"aTest\" at 10) of no value.\nBecause if the user needs to alter table aTest they still need to \ntype the double quotes manually.\n\nAnother thing is the inconsistency of the output result.\n12) postgres=# alter table atest rename i[TAB]\n13) id \"iD\"\n14) postgres=# alter table atest rename \"i[TAB]\n15) \"id\" \"iD\"\n\nBy applying the new fix, Line 15 added the output of \"id\".\nI think it's good to keep the user's input '\"', and it's convenient when using tab completion.\nOn the other hand, I'm not so comfortable with the output of \"iD\" in line 13.\nIf the user doesn't type a double quote, why do we add double quotes to the output?\nCould we make the output of 13) like below?\n12) postgres=# alter table atest rename i[TAB]\n??) 
id iD\n\nRegards,\nTang\n\n\n\n", "msg_date": "Fri, 28 Jan 2022 06:37:07 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Support tab completion for upper character inputs in psql" }, { "msg_contents": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com> writes:\n> I did some tests on it and here are something cases I feel we need to confirm \n> whether they are suitable.\n\n> 1) postgres=# create table atest(id int, \"iD\" int, \"ID\" int);\n> 2) CREATE TABLE\n> 3) postgres=# alter table atest rename i[TAB]\n> 4) id \"iD\"\n> 5) postgres=# alter table atest rename I[TAB]\n> 6) id \"iD\"\n\n> The tab completion for 5) ignored \"ID\", is that correct?\n\nPerhaps I misunderstood your original complaint, but what I thought\nyou were unhappy about was that unquoted ID is a legal spelling of\n\"id\" and so I<TAB> ought to be willing to complete that. These\nexamples with case variants of the same word are of some interest,\nbut people aren't really going to create tables with these sorts of\nnames, so we shouldn't let them drive the design IMO.\n\nAnyway, the existing behavior for these examples is\n\nalter table atest rename i<TAB> --- completes immediately to id\nalter table atest rename I<TAB> --- offers nothing\n\nIt's certainly arguable that the first case is right as-is and we\nshouldn't change it. I think that could be handled by tweaking my\npatch so that it wouldn't offer completions that start with a quote\nunless the input word does. That would also cause I<TAB> to complete\nimmediately to id, which is arguably fine.\n\n> I think what we are trying to do is to ease the burden of typing double quote for user.\n\nI'm not thinking about it that way at all. To me, the goal is to make\ntab completion do something sensible when presented with legal variant\nspellings of a word. 
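A rough sketch of what treating such variant spellings uniformly involves — stripping an optional pair of double quotes, collapsing doubled embedded quotes, and case-folding unquoted input — might look like this. The helper name is invented, it handles ASCII folding only, and the real patch's identifier parsing is considerably more careful about multibyte text and malformed quoting:

```c
#include <stdlib.h>
#include <string.h>

/*
 * Hypothetical sketch: normalize an identifier spelling so that mytab,
 * MYTAB and "mytab" all compare equal to the catalog name mytab.  A
 * surrounding pair of double quotes is stripped, embedded doubled quotes
 * are collapsed, and unquoted input is folded to lower case (ASCII only
 * here).  Not the actual patch code.
 */
static char *
normalize_ident_copy(const char *src)
{
	size_t		len = strlen(src);
	char	   *dst = malloc(len + 1);	/* error handling omitted */
	size_t		j = 0;
	int			quoted = (len >= 2 && src[0] == '"' && src[len - 1] == '"');
	size_t		i = quoted ? 1 : 0;
	size_t		end = quoted ? len - 1 : len;

	for (; i < end; i++)
	{
		char		ch = src[i];

		if (quoted && ch == '"' && i + 1 < end && src[i + 1] == '"')
			i++;				/* "" inside quotes means one " */
		else if (!quoted && ch >= 'A' && ch <= 'Z')
			ch += 'a' - 'A';	/* unquoted spellings fold down */
		dst[j++] = ch;
	}
	dst[j] = '\0';
	return dst;
}
```

With both the typed word and the catalog name normalized this way, mytab, MYTAB and "mytab" all match a table named mytab, while "MixedTab" matches only a table actually created with that mixed-case name.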
The two cases where it currently fails to do\nthat are (1) unquoted input that needs to be downcased, and (2) input\nthat is quoted when it doesn't strictly need to be.\n\nTo the extent that we can supply a required quote that the user\nfailed to type, that's fine, but it's not a primary goal of the patch.\nExamples like these make me question whether it's even something we\nwant; it's resulting in extraneous matches that people might find more\nannoying than helpful. Now I *think* that these aren't realistic\ncases and that in real cases adding quotes will be helpful more often\nthan not, but it's debatable.\n\n> One the other hand, I'm not so comfortable with the output of \"iD\" in line 13.\n> If user doesn't type double quote, why we add double quote to the output?\n\nThat's certainly a valid argument.\n\n> Could we make the output of 13) like below?\n> 12) postgres=# alter table atest rename i[TAB]\n> ??) id iD\n\nThat doesn't seem sensible at all.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 28 Jan 2022 11:03:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "I wrote:\n> It's certainly arguable that the first case is right as-is and we\n> shouldn't change it. I think that could be handled by tweaking my\n> patch so that it wouldn't offer completions that start with a quote\n> unless the input word does. That would also cause I<TAB> to complete\n> immediately to id, which is arguably fine.\n\nHere's a patch series that does it like that. I have to admit that\nafter playing with it, this is probably better. There's less\nmagic-looking behavior involved, and it lets me drop an ugly hack\nI had to work around a case where Readline didn't want to play along.\n\n0001 also cleans up one oversight in the previous version, which\nis to beware of multibyte characters in parse_identifier(). 
I'm\nnot sure there is any actual hazard there, since we weren't looking\nfor backslashes, but it's better to be sure. I added the keyword\nhandling I'd left out before, too.\n\n0002-0004 are largely as before.\n\nI've also added 0005, which changes the prefix-matching clauses\nin the SQL queries from \"substring(foo,1,%d)='%s'\" to\n\"foo LIKE '%s'\". This simplifies reading the queries a little bit,\nbut the real reason to do it is that the planner can optimize the\ncatalog searches a lot better. It knows a lot about LIKE prefix\nqueries and exactly nothing about substring(). For example,\nDROP TYPE foo<TAB> now produces a query like this:\n\nexplain SELECT t.typname, NULL::pg_catalog.text FROM pg_catalog.pg_type t WHERE (t.typrelid = 0 OR (SELECT c.relkind = 'c' FROM pg_catalog.pg_class c WHERE c.oid = t.typrelid)) AND t.typname !~ '^_' AND (t.typname) LIKE 'foo%' AND pg_catalog.pg_type_is_visible(t.oid);\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using pg_type_typname_nsp_index on pg_type t (cost=0.28..16.63 rows=1 width=96)\n Index Cond: ((typname >= 'foo'::text) AND (typname < 'fop'::text))\n Filter: ((typname !~ '^_'::text) AND (typname ~~ 'foo%'::text) AND pg_type_is_visible(oid) AND ((typrelid = '0'::oid) OR (SubPlan 1)))\n SubPlan 1\n -> Index Scan using pg_class_oid_index on pg_class c (cost=0.28..8.30 rows=1 width=1)\n Index Cond: (oid = t.typrelid)\n(6 rows)\n\nwhere before you got a seqscan:\n\nexplain SELECT pg_catalog.format_type(t.oid, NULL) FROM pg_catalog.pg_type t WHERE (t.typrelid = 0 OR (SELECT c.relkind = 'c' FROM pg_catalog.pg_class c WHERE c.oid = t.typrelid)) AND t.typname !~ '^_' AND substring(pg_catalog.format_type(t.oid, NULL),1,3)='foo' AND pg_catalog.pg_type_is_visible(t.oid);\n QUERY PLAN 
\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on pg_type t (cost=0.00..16691.86 rows=1 width=32)\n Filter: ((typname !~ '^_'::text) AND (\"substring\"(format_type(oid, NULL::integer), 1, 3) = 'foo'::text) AND pg_type_is_visible(oid) AND ((typrelid = '0'::oid) OR (SubPlan 1)))\n SubPlan 1\n -> Index Scan using pg_class_oid_index on pg_class c (cost=0.28..8.30 rows=1 width=1)\n Index Cond: (oid = t.typrelid)\n(5 rows)\n\nAgain, while these queries only have to run at human speed, that doesn't\nmean it's okay to be wasteful. I seem to recall hearing complaints that\nthey are noticeably slow in installations with many thousand tables, too.\nThis should help.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 28 Jan 2022 16:25:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "I wrote:\n> [ v15 patch set ]\n\nSigh ... 
per the cfbot, this was already blindsided by 95787e849.\nAs I said, I don't want to sit on this for very long.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 28 Jan 2022 17:16:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "On Saturday, January 29, 2022 1:03 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com> writes:\n> > I did some tests on it and here are something cases I feel we need to confirm\n> > whether they are suitable.\n> \n> > 1) postgres=# create table atest(id int, \"iD\" int, \"ID\" int);\n> > 2) CREATE TABLE\n> > 3) postgres=# alter table atest rename i[TAB]\n> > 4) id \"iD\"\n> > 5) postgres=# alter table atest rename I[TAB]\n> > 6) id \"iD\"\n> \n> > The tab completion for 5) ignored \"ID\", is that correct?\n> \n> Perhaps I misunderstood your original complaint, but what I thought\n> you were unhappy about was that unquoted ID is a legal spelling of\n> \"id\" and so I<TAB> ought to be willing to complete that. These\n> examples with case variants of the same word are of some interest,\n> but people aren't really going to create tables with these sorts of\n> names, so we shouldn't let them drive the design IMO.\n> \n> Anyway, the existing behavior for these examples is\n> \n> alter table atest rename i<TAB> --- completes immediately to id\n> alter table atest rename I<TAB> --- offers nothing\n> \n> It's certainly arguable that the first case is right as-is and we\n> shouldn't change it. I think that could be handled by tweaking my\n> patch so that it wouldn't offer completions that start with a quote\n> unless the input word does. That would also cause I<TAB> to complete\n> immediately to id, which is arguably fine.\n> \n> > I think what we are trying to do is to ease the burden of typing double quote\n> for user.\n> \n> I'm not thinking about it that way at all. 
To me, the goal is to make\n> tab completion do something sensible when presented with legal variant\n> spellings of a word. The two cases where it currently fails to do\n> that are (1) unquoted input that needs to be downcased, and (2) input\n> that is quoted when it doesn't strictly need to be.\n> \n> To the extent that we can supply a required quote that the user\n> failed to type, that's fine, but it's not a primary goal of the patch.\n> Examples like these make me question whether it's even something we\n> want; it's resulting in extraneous matches that people might find more\n> annoying than helpful. Now I *think* that these aren't realistic\n> cases and that in real cases adding quotes will be helpful more often\n> than not, but it's debatable.\n> \n> > One the other hand, I'm not so comfortable with the output of \"iD\" in line\n> 13.\n> > If user doesn't type double quote, why we add double quote to the output?\n> \n> That's certainly a valid argument.\n> \n> > Could we make the output of 13) like below?\n> > 12) postgres=# alter table atest rename i[TAB]\n> > ??) id iD\n> \n> That doesn't seem sensible at all.\n\nThanks for your kindly explanation. \nI'm fine with the current tap completion style with your V16 patch.\n\nRegards,\nTang\n\n\n", "msg_date": "Sun, 30 Jan 2022 07:07:17 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Support tab completion for upper character inputs in psql" }, { "msg_contents": "On Saturday, January 29, 2022 7:17 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Sigh ... per the cfbot, this was already blindsided by 95787e849.\n> As I said, I don't want to sit on this for very long.\n\nThanks for your V16 patch, I tested it. 
\nThe results LGTM.\n\nRegards,\nTang\n\n\n", "msg_date": "Sun, 30 Jan 2022 07:13:40 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Support tab completion for upper character inputs in psql" }, { "msg_contents": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com> writes:\n> Thanks for your V16 patch, I tested it. \n> The results LGTM.\n\nPushed, thanks for looking.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 30 Jan 2022 13:34:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> \"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com> writes:\n>> Thanks for your V16 patch, I tested it. \n>> The results LGTM.\n>\n> Pushed, thanks for looking.\n\nI wasn't following this thread, but I noticed a few small potential\nimprovements when I saw the commit.\n\nFirst, as noted in the test, it doesn't preserve the case of the input\nfor keywords appended to the query result. This is easily fixed by\nusing `pg_strdup_keyword_case()`, per the first attached patch.\n\nThe second might be more of a matter of style or opinion, but I noticed\na bunch of `if (foo) free(foo);`, which is redundant given that\n`free(NULL)` is a no-op. To simplify the code further, I also made\n`escape_string(NULL)` be a no-op, returning `NULL`.\n\n- ilmari", "msg_date": "Mon, 31 Jan 2022 01:28:45 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n> First, as noted in the test, it doesn't preserve the case of the input\n> for keywords appended to the query result. 
This is easily fixed by\n> using `pg_strdup_keyword_case()`, per the first attached patch.\n\nI thought about that, and intentionally didn't do it, because it\nwould also affect the menus produced by tab completion. Currently,\nkeywords are (usually) visually distinct from non-keywords in those\nmenus, thanks to being upper-case where the object names usually\naren't:\n\nregression=# create table foo (c1 int, c2 int); \nCREATE TABLE\nregression=# alter table foo rename c<TAB>\nc1 c2 COLUMN CONSTRAINT \n\nWith this change, the keywords would be visually indistinguishable\nfrom the object names, which I felt wouldn't be a net improvement.\n\nWe could do something hacky like matching case only when there's\nno longer any matching object names, but that might be too magic.\n\n> The second might be more of a matter of style or opinion, but I noticed\n> a bunch of `if (foo) free(foo);`, which is redundant given that\n> `free(NULL)` is a no-op. To simplify the code further, I also made\n> `escape_string(NULL)` be a no-op, returning `NULL`.\n\nYeah. Our fairly longstanding convention is to avoid doing\nfree(NULL), dating back to when some platforms would crash on it.\nI realize that's archaic now, but I'm not inclined to change\nit in just some places.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 30 Jan 2022 20:54:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "I wrote:\n> =?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n>> First, as noted in the test, it doesn't preserve the case of the input\n>> for keywords appended to the query result. 
This is easily fixed by\n>> using `pg_strdup_keyword_case()`, per the first attached patch.\n\n> I thought about that, and intentionally didn't do it, because it\n> would also affect the menus produced by tab completion.\n> ...\n> We could do something hacky like matching case only when there's\n> no longer any matching object names, but that might be too magic.\n\nI experimented with that, and it actually doesn't seem as weird\nas I feared. See if you like this ...\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 31 Jan 2022 16:39:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> I wrote:\n>> =?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n>>> First, as noted in the test, it doesn't preserve the case of the input\n>>> for keywords appended to the query result. This is easily fixed by\n>>> using `pg_strdup_keyword_case()`, per the first attached patch.\n>\n>> I thought about that, and intentionally didn't do it, because it\n>> would also affect the menus produced by tab completion.\n>> ...\n>> We could do something hacky like matching case only when there's\n>> no longer any matching object names, but that might be too magic.\n>\n> I experimented with that, and it actually doesn't seem as weird\n> as I feared. See if you like this ...\n\nThat's a reasonable compromise, and the implementation is indeed less\nhacky than one might have feared. 
Although I think putting the\n`num_keywords` variable before `num_other` would read better.\n\nGoing through the uses of COMPLETE_WITH(_SCHEMA)_QUERY_PLUS, I noticed a\nfew that had the keywords in lower case, which is fixed in the attached\npatch (except the hardcoded data types, which aren't really keywords).\nWhile I was there, I also added completion of \"AUTHORIZATION\" after\n\"SHOW SESSSION\", which is necessary since there are variables starting\nwith \"session_\".\n\n> \t\t\tregards, tom lane\n\nCheers,\n- ilmari", "msg_date": "Tue, 01 Feb 2022 13:15:45 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>>> We could do something hacky like matching case only when there's\n>>> no longer any matching object names, but that might be too magic.\n>> I experimented with that, and it actually doesn't seem as weird\n>> as I feared. See if you like this ...\n\n> That's a reasonable compromise, and the implementation is indeed less\n> hacky than one might have feared. Although I think putting the\n> `num_keywords` variable before `num_other` would read better.\n\nHm... I renamed \"num_other\" to \"num_query_other\" instead.\n\n> Going through the uses of COMPLETE_WITH(_SCHEMA)_QUERY_PLUS, I noticed a\n> few that had the keywords in lower case, which is fixed in the attached\n> patch (except the hardcoded data types, which aren't really keywords).\n\nYeah, my oversight. 
Pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 01 Feb 2022 17:06:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>>> We could do something hacky like matching case only when there's\n>>> no longer any matching object names, but that might be too magic.\n\n>> I experimented with that, and it actually doesn't seem as weird\n>> as I feared. See if you like this ...\n\n> That's a reasonable compromise, and the implementation is indeed less\n> hacky than one might have feared. Although I think putting the\n> `num_keywords` variable before `num_other` would read better.\n\nAfter a few days of using that, I'm having second thoughts about it,\nbecause it turns out to impede completion in common cases. For\nexample,\n\nregression=# set transa<TAB><TAB>\nTRANSACTION transaction_isolation \ntransaction_deferrable transaction_read_only \n\nIt won't fill in \"ction\" because of the case discrepancy between the\noffered alternatives. Maybe this trumps the question of whether you\nshould be able to distinguish keywords from non-keywords in the menus.\nIf we case-folded the keywords as per your original proposal, it'd do\nwhat I expect it to.\n\nIn previous releases, this worked as expected: \"set transa<TAB>\"\nimmediately completes \"ction\", and then tabbing produces this\nmenu:\n\ntransaction transaction_isolation \ntransaction_deferrable transaction_read_only \n\nThat probably explains why these keywords were lower-cased in\nthe previous code. However, I don't think we should blame\nyour suggestion to upcase them, because the same problem arises\nin other completion contexts where we offer keywords. 
We should\nsolve it across-the-board not just for these specific queries.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 04 Feb 2022 18:41:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" }, { "msg_contents": "On Monday, January 31, 2022 3:35 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com> writes:\n> > Thanks for your V16 patch, I tested it.\n> > The results LGTM.\n> \n> Pushed, thanks for looking.\n\nI think 02b8048 forgot to free some used memory. \nAttached a tiny patch to fix it. Please have a check.\n\nRegards,\nTang", "msg_date": "Fri, 22 Jul 2022 08:30:48 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Support tab completion for upper character inputs in psql" }, { "msg_contents": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com> writes:\n> I think 02b8048 forgot to free some used memory. \n> Attached a tiny patch to fix it. Please have a check.\n\nRight you are. Inspired by that, I tried running some tab-completion\noperations under valgrind, and found another nearby leak in\npatternToSQLRegex. Fixes pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Jul 2022 10:55:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support tab completion for upper character inputs in psql" } ]
[ { "msg_contents": "Hi,\n\nA particular useful feature of jsonb arrays,\nis the ability to represent multidimensional arrays without matching dimensions,\nwhich is not possible with normal PostgreSQL arrays.\n\nSELECT array[[5,2],1,[8,[3,2],6]];\nERROR: multidimensional arrays must have array expressions with matching dimensions\n\nSELECT '[[5,2],1,[8,[3,2],6]]'::jsonb;\n[[5, 2], 1, [8, [3, 2], 6]]\n\nWhen working with jsonb array structures,\nthere is already jsonb_array_elements() to expand the top-level.\n\nAnother case that I think is common is wanting to expand all levels, not just the top-level.\n\nMaybe it's common enough to motivate a new param:\n\n jsonb_array_elements(from_json jsonb [, recursive boolean ])\n\nOr as a separate function. Below is a PoC in PL/pgSQL:\n\nCREATE OR REPLACE FUNCTION jsonb_array_elements_recursive(from_json jsonb, OUT value jsonb)\nRETURNS SETOF jsonb\nLANGUAGE plpgsql\nAS $$\nBEGIN\nFOR value IN SELECT jsonb_array_elements(from_json) LOOP\n IF jsonb_typeof(value) <> 'array' THEN\n RETURN NEXT;\n ELSE\n RETURN QUERY\n SELECT * FROM jsonb_array_elements_recursive(value);\n END IF;\nEND LOOP;\nEND\n$$;\n\n# SELECT * FROM jsonb_array_elements_recursive('[[5, 2], 1, [8, [3, 2], 6]]'::jsonb);\nvalue\n-------\n5\n2\n1\n8\n3\n2\n6\n(7 rows)\n\nI tried but failed to implement a PoC in pure SQL,\nnot even using the new CTE SEARCH functionality,\nbut maybe it's possible somehow.\n\n/Joel", "msg_date": "Sun, 07 Feb 2021 10:54:51 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "jsonb_array_elements_recursive()" },
{ "msg_contents": "Having thought about this some more,\nthe function name should of course be jsonb_unnest(),\nsimilar to how unnest() works for normal arrays:\n\nSELECT unnest(array[[3,2],[1,4]]);\nunnest\n--------\n 3\n 2\n 1\n 4\n(4 rows)\n\nSELECT jsonb_unnest('[[3,2],[1,4]]'::jsonb);\njsonb_unnest\n--------------------\n3\n2\n1\n4\n(4 rows)\n\nThoughts?", "msg_date": "Sun, 07 Feb 2021 16:59:04 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: jsonb_array_elements_recursive()" }, { "msg_contents": "Hi\n\nne 7. 2.
2021 v 16:59 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> Having thought about this some more,\n> the function name should of course be jsonb_unnest(),\n> similar to how unnest() works for normal arrays:\n>\n> SELECT unnest(array[[3,2],[1,4]]);\n> unnest\n> --------\n> 3\n> 2\n> 1\n> 4\n> (4 rows)\n>\n> SELECT jsonb_unnest('[[3,2],[1,4]]'::jsonb);\n> jsonb_unnest\n> --------------------\n> 3\n> 2\n> 1\n> 4\n> (4 rows)\n>\n> Thoughts?\n>\n\nIt has sense. Maybe it should return two columns - first path to value,\nand second with value. It can be used like some \"reader\"\n\nRegards\n\nPavel", "msg_date": "Sun, 7 Feb 2021 17:08:46 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_array_elements_recursive()" },
{ "msg_contents": "On Sun, Feb 7, 2021, at 17:08, Pavel Stehule wrote:\n>>ne 7. 2. 2021 v 16:59 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n>>\n>>SELECT jsonb_unnest('[[3,2],[1,4]]'::jsonb);\n>>jsonb_unnest\n>>--------------------\n>>3\n>>2\n>>1\n>>4\n>>(4 rows)\n>\n>It has sense. Maybe it should return two columns - first path to value, and second with value.
It can be used like some >\"reader\"\n\nThanks for thinking about this.\n\nI would expect jsonb_unnest() to have the same semantics as unnest(), but returning SETOF jsonb.\n\njsonb_unnest() implemented in C would of course be much more performant than the PL/pgSQL PoC.\nAnd I think performance could be important for such a function,\nso I think we should be careful adding extra complexity to such a function,\nunless it can be demonstrated it is needed for a majority of cases.\n\n/Joel", "msg_date": "Sun, 07 Feb 2021 17:27:01 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: jsonb_array_elements_recursive()" },
{ "msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> Having thought about this some more,\n> the function name should of course be jsonb_unnest(),\n> similar to how unnest() works for normal arrays:\n\nWhy not just unnest(), then?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 07 Feb 2021 11:27:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: jsonb_array_elements_recursive()" }, { "msg_contents": "On Sun, Feb 7, 2021, at 17:27, Tom Lane wrote:\n>\"Joel
Jacobson\" <joel@compiler.org> writes:\n>> Having thought about this some more,\n>> the function name should of course be jsonb_unnest(),\n>> similar to how unnest() works for normal arrays:\n>\n>Why not just unnest(), then?\n>\n>regards, tom lane\n\nAhh, of course! I totally forgot about function overloading when thinking about this.\n\n+1\n\n/Joel", "msg_date": "Sun, 07 Feb 2021 17:31:13 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: jsonb_array_elements_recursive()" },
{ "msg_contents": "Hi,\n# SELECT '[[5,2],\"a\",[8,[3,2],6]]'::jsonb;\n jsonb\n-------------------------------\n [[5, 2], \"a\", [8, [3, 2], 6]]\n(1 row)\n\nunnest(array[[3,2],\"a\",[1,4]]) is not accepted currently.\n\nWould the enhanced unnest accept the above array ?\n\nCheers\n\nOn Sun, Feb 7, 2021 at 8:31 AM Joel Jacobson <joel@compiler.org> wrote:\n\n> On Sun, Feb 7, 2021, at 17:27, Tom Lane wrote:\n> >\"Joel Jacobson\" <joel@compiler.org> writes:\n> >> Having thought about this some more,\n> >> the function name should of course be jsonb_unnest(),\n> >> similar to how unnest() works for normal arrays:\n> >\n> >Why not just unnest(), then?\n> >\n> >regards, tom lane\n>\n> Ahh, of course!
I totally forgot about function overloading when thinking\n> about this.\n>\n> +1\n>\n> /Joel\n>\n", "msg_date": "Sun, 7 Feb 2021 09:33:29 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_array_elements_recursive()" },
{ "msg_contents": "Hi\n\nne 7. 2. 2021 v 18:31 odesílatel Zhihong Yu <zyu@yugabyte.com> napsal:\n\n> Hi,\n> # SELECT '[[5,2],\"a\",[8,[3,2],6]]'::jsonb;\n> jsonb\n> -------------------------------\n> [[5, 2], \"a\", [8, [3, 2], 6]]\n> (1 row)\n>\n> unnest(array[[3,2],\"a\",[1,4]]) is not accepted currently.\n>\n> Would the enhanced unnest accept the above array ?\n>\n\nThere should be a special overwrite for json type. Json can hold an array,\nbut from Postgres perspective, it is not an array.\n\nBut there is really one specific case. We can have an array of json(b), and\ninside there should be other arrays.
So nesting can be across values.\n\nRegards\n\nPavel\n\n\n\n>\n> Cheers\n>\n> On Sun, Feb 7, 2021 at 8:31 AM Joel Jacobson <joel@compiler.org> wrote:\n>\n>> On Sun, Feb 7, 2021, at 17:27, Tom Lane wrote:\n>> >\"Joel Jacobson\" <joel@compiler.org> writes:\n>> >> Having thought about this some more,\n>> >> the function name should of course be jsonb_unnest(),\n>> >> similar to how unnest() works for normal arrays:\n>> >\n>> >Why not just unnest(), then?\n>> >\n>> >regards, tom lane\n>>\n>> Ahh, of course! I totally forgot about function overloading when thinking\n>> about this.\n>>\n>> +1\n>>\n>> /Joel\n>>\n>\n", "msg_date": "Sun, 7 Feb 2021 18:35:51 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_array_elements_recursive()" },
{ "msg_contents": "On Sunday, February 7, 2021, Zhihong Yu <zyu@yugabyte.com> wrote:\n\n> Hi,\n> # SELECT '[[5,2],\"a\",[8,[3,2],6]]'::jsonb;\n> jsonb\n> -------------------------------\n> [[5, 2], \"a\", [8, [3, 2], 6]]\n> (1 row)\n>\n> unnest(array[[3,2],\"a\",[1,4]]) is not accepted currently.\n>\n> Would the enhanced unnest accept the above array ?\n>\n\nIts not possible to even create that sql array so whether the unnest\nfunction could do something useful with it is immaterial.\n\nDavid J.", "msg_date": "Sun, 7 Feb 2021 10:37:51 -0700", "msg_from": "\"David G.
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_array_elements_recursive()" },
{ "msg_contents": "On Sun, Feb 7, 2021, at 18:33, Zhihong Yu wrote:\n>Hi,\n># SELECT '[[5,2],\"a\",[8,[3,2],6]]'::jsonb;\n> jsonb\n>-------------------------------\n> [[5, 2], \"a\", [8, [3, 2], 6]]\n>(1 row)\n>\n>unnest(array[[3,2],\"a\",[1,4]]) is not accepted currently.\n>\n>Would the enhanced unnest accept the above array ?\n>\n>Cheers\n\nYes, but only if the overloaded jsonb version of unnest() exists,\nand only if it's a jsonb array, not a normal array, like Pavel explained.\n\nYour example using a PoC PL/pgSQL:\n\nCREATE FUNCTION unnest(jsonb)\nRETURNS SETOF jsonb\nLANGUAGE plpgsql\nAS $$\nDECLARE\nvalue jsonb;\nBEGIN\nFOR value IN SELECT jsonb_array_elements($1) LOOP\n IF jsonb_typeof(value) <> 'array' THEN\n RETURN NEXT value;\n ELSE\n RETURN QUERY\n SELECT pit.jsonb_array_elements_recursive(value);\n END IF;\nEND LOOP;\nEND\n$$;\n\nSELECT unnest('[[5,2],\"a\",[8,[3,2],6]]'::jsonb);\nunnest\n--------\n5\n2\n\"a\"\n8\n3\n2\n6\n(7 rows)\n\nCheers,\n\n/Joel", "msg_date":
"Sun, 07 Feb 2021 18:42:33 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: jsonb_array_elements_recursive()" }, { "msg_contents": "On Sun, Feb 7, 2021, at 18:42, Joel Jacobson wrote:\n> SELECT pit.jsonb_array_elements_recursive(value);\n\nSorry, that line should have been:\n\n SELECT unnest(value);\n\n\n\n", "msg_date": "Sun, 07 Feb 2021 18:43:29 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: jsonb_array_elements_recursive()" }, { "msg_contents": "ne 7. 2. 2021 v 18:43 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> On Sun, Feb 7, 2021, at 18:33, Zhihong Yu wrote:\n> >Hi,\n> ># SELECT '[[5,2],\"a\",[8,[3,2],6]]'::jsonb;\n> > jsonb\n> >-------------------------------\n> > [[5, 2], \"a\", [8, [3, 2], 6]]\n> >(1 row)\n> >\n> >unnest(array[[3,2],\"a\",[1,4]]) is not accepted currently.\n> >\n> >Would the enhanced unnest accept the above array ?\n> >\n> >Cheers\n>\n> Yes, but only if the overloaded jsonb version of unnest() exists,\n> and only if it's a jsonb array, not a normal array, like Pavel explained.\n>\n> Your example using a PoC PL/pgSQL:\n>\n> CREATE FUNCTION unnest(jsonb)\n> RETURNS SETOF jsonb\n> LANGUAGE plpgsql\n> AS $$\n> DECLARE\n> value jsonb;\n> BEGIN\n> FOR value IN SELECT jsonb_array_elements($1) LOOP\n> IF jsonb_typeof(value) <> 'array' THEN\n> RETURN NEXT value;\n> ELSE\n> RETURN QUERY\n> SELECT pit.jsonb_array_elements_recursive(value);\n> END IF;\n> END LOOP;\n> END\n> $$;\n>\n> SELECT unnest('[[5,2],\"a\",[8,[3,2],6]]'::jsonb);\n> unnest\n> --------\n> 5\n> 2\n> \"a\"\n> 8\n> 3\n> 2\n> 6\n> (7 rows)\n>\n> Cheers,\n>\n\njust note - isn't it possible to use \"not committed yet\" function\njson_table instead?\n\nhttps://commitfest.postgresql.org/32/2902/\n\nI understand 
your request - but I am afraid so we are opening a Pandora box\na little bit. There is a possible collision between Postgres first class\narrays and non atomic types. I am not sure if a functional API is enough to\ncover all valuable cases. The functional API is limited and if we cross\nsome borders, we can get more often errors of type FUNCLOOKUP_AMBIGUOUS. So\nif proposed functionality can be implemented by ANSI/SQL dedicated\nfunction, then it can be better. Second possibility is enhancing the\nPLpgSQL FOREACH statement. There we have more possibilities to design\nnecessary syntax, and we don't need to solve possible problems with\nhandling ambiguous overloaded functions. I don't afraid of semantics. The\nproblems can be in parser in function lookup.\n\nSemantically - now the types can support a subscripting interface. There\ncan be some similarity for type's iterators over nested fields.\n\nRegards\n\nPavel\n\n\n\n> /Joel\n>\n\n", "msg_date": 
SELECT unnest('[[5,2],\"a\",[8,[3,2],6]]'::jsonb);\n\nSince the array without cast is not normal array (and would be rejected), I\nwonder if the cast is needed.\nBecause casting to jsonb is the only legitimate interpretation here.\n\nCheers\n\nOn Sun, Feb 7, 2021 at 9:42 AM Joel Jacobson <joel@compiler.org> wrote:\n\n> On Sun, Feb 7, 2021, at 18:33, Zhihong Yu wrote:\n> >Hi,\n> ># SELECT '[[5,2],\"a\",[8,[3,2],6]]'::jsonb;\n> > jsonb\n> >-------------------------------\n> > [[5, 2], \"a\", [8, [3, 2], 6]]\n> >(1 row)\n> >\n> >unnest(array[[3,2],\"a\",[1,4]]) is not accepted currently.\n> >\n> >Would the enhanced unnest accept the above array ?\n> >\n> >Cheers\n>\n> Yes, but only if the overloaded jsonb version of unnest() exists,\n> and only if it's a jsonb array, not a normal array, like Pavel explained.\n>\n> Your example using a PoC PL/pgSQL:\n>\n> CREATE FUNCTION unnest(jsonb)\n> RETURNS SETOF jsonb\n> LANGUAGE plpgsql\n> AS $$\n> DECLARE\n> value jsonb;\n> BEGIN\n> FOR value IN SELECT jsonb_array_elements($1) LOOP\n> IF jsonb_typeof(value) <> 'array' THEN\n> RETURN NEXT value;\n> ELSE\n> RETURN QUERY\n> SELECT pit.jsonb_array_elements_recursive(value);\n> END IF;\n> END LOOP;\n> END\n> $$;\n>\n> SELECT unnest('[[5,2],\"a\",[8,[3,2],6]]'::jsonb);\n> unnest\n> --------\n> 5\n> 2\n> \"a\"\n> 8\n> 3\n> 2\n> 6\n> (7 rows)\n>\n> Cheers,\n>\n> /Joel\n>\n\nHi,bq. 
SELECT unnest('[[5,2],\"a\",[8,[3,2],6]]'::jsonb);\n\nSince the array without cast is not normal array (and would be rejected), I\nwonder if the cast is needed.\nBecause casting to jsonb is the only legitimate interpretation here.\n\nCheers\n\nOn Sun, Feb 7, 2021 at 9:42 AM Joel Jacobson <joel@compiler.org> wrote:\n\n> On Sun, Feb 7, 2021, at 18:33, Zhihong Yu wrote:\n> >Hi,\n> ># SELECT '[[5,2],\"a\",[8,[3,2],6]]'::jsonb;\n> > jsonb\n> >-------------------------------\n> > [[5, 2], \"a\", [8, [3, 2], 6]]\n> >(1 row)\n> >\n> >unnest(array[[3,2],\"a\",[1,4]]) is not accepted currently.\n> >\n> >Would the enhanced unnest accept the above array ?\n> >\n> >Cheers\n>\n> Yes, but only if the overloaded jsonb version of unnest() exists,\n> and only if it's a jsonb array, not a normal array, like Pavel explained.\n>\n> Your example using a PoC PL/pgSQL:\n>\n> CREATE FUNCTION unnest(jsonb)\n> RETURNS SETOF jsonb\n> LANGUAGE plpgsql\n> AS $$\n> DECLARE\n> value jsonb;\n> BEGIN\n> FOR value IN SELECT jsonb_array_elements($1) LOOP\n> IF jsonb_typeof(value) <> 'array' THEN\n> RETURN NEXT value;\n> ELSE\n> RETURN QUERY\n> SELECT pit.jsonb_array_elements_recursive(value);\n> END IF;\n> END LOOP;\n> END\n> $$;\n>\n> SELECT unnest('[[5,2],\"a\",[8,[3,2],6]]'::jsonb);\n> unnest\n> --------\n> 5\n> 2\n> \"a\"\n> 8\n> 3\n> 2\n> 6\n> (7 rows)\n>\n> Cheers,\n>\n> /Joel\n>\n\n", "msg_date": "Sun, 7 Feb 2021 10:20:29 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_array_elements_recursive()" }, { "msg_contents": "ne 7. 2. 2021 v 19:18 odesílatel Zhihong Yu <zyu@yugabyte.com> napsal:\n\n> Hi,\n>\n> bq. SELECT unnest('[[5,2],\"a\",[8,[3,2],6]]'::jsonb);\n>\n> Since the array without cast is not normal array (and would be rejected),\n> I wonder if the cast is needed.\n> Because casting to jsonb is the only legitimate interpretation here.\n>\n\nonly until somebody does support for hstore, xml, ... 
some future data typeMinimally now, we have json, jsonb types.RegardsPavelCheersOn Sun, Feb 7, 2021 at 9:42 AM Joel Jacobson <joel@compiler.org> wrote:On Sun, Feb 7, 2021, at 18:33, Zhihong Yu wrote:>Hi,># SELECT '[[5,2],\"a\",[8,[3,2],6]]'::jsonb;>             jsonb>-------------------------------> [[5, 2], \"a\", [8, [3, 2], 6]]>(1 row)>>unnest(array[[3,2],\"a\",[1,4]]) is not accepted currently.>>Would the enhanced unnest accept the above array ?>>CheersYes, but only if the overloaded jsonb version of unnest() exists,and only if it's a jsonb array, not a normal array, like Pavel explained.Your example using a PoC PL/pgSQL:CREATE FUNCTION unnest(jsonb)RETURNS SETOF jsonbLANGUAGE plpgsqlAS $$DECLAREvalue jsonb;BEGINFOR value IN SELECT jsonb_array_elements($1) LOOP  IF jsonb_typeof(value) <> 'array' THEN    RETURN NEXT value;  ELSE    RETURN QUERY    SELECT pit.jsonb_array_elements_recursive(value);  END IF;END LOOP;END$$;SELECT unnest('[[5,2],\"a\",[8,[3,2],6]]'::jsonb);unnest--------52\"a\"8326(7 rows)Cheers,/Joel", "msg_date": "Sun, 7 Feb 2021 19:38:39 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_array_elements_recursive()" }, { "msg_contents": "On Sun, Feb 7, 2021 at 11:39 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> ne 7. 2. 2021 v 19:18 odesílatel Zhihong Yu <zyu@yugabyte.com> napsal:\n>\n>> Hi,\n>>\n>> bq. SELECT unnest('[[5,2],\"a\",[8,[3,2],6]]'::jsonb);\n>>\n>> Since the array without cast is not normal array (and would be rejected),\n>> I wonder if the cast is needed.\n>> Because casting to jsonb is the only legitimate interpretation here.\n>>\n>\n> only until somebody does support for hstore, xml, ... 
some future data type\n>\n> Minimally now, we have json, jsonb types.\n>\n>\nMore generally, a sequence of characters has no meaning to the system\nunless and until an externally supplied type is given to it allowing it to\ninterpret the sequence of characters in some concrete way. The system will\nnever assign a concrete type to some random sequence of characters based\nupon what those characters are. Forgive the idiom, but to do otherwise\nwould be putting the cart before the horse. It would also be quite\nexpensive and prone to, as above, different types deciding on the same\ntextual representation being valid input to each.\n\nDavid J.\n\nOn Sun, Feb 7, 2021 at 11:39 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:ne 7. 2. 2021 v 19:18 odesílatel Zhihong Yu <zyu@yugabyte.com> napsal:Hi,bq. SELECT unnest('[[5,2],\"a\",[8,[3,2],6]]'::jsonb);Since the array without cast is not normal array (and would be rejected), I wonder if the cast is needed.Because casting to jsonb is the only legitimate interpretation here.only until somebody does support for hstore, xml, ... some future data typeMinimally now, we have json, jsonb types.More generally, a sequence of characters has no meaning to the system unless and until an externally supplied type is given to it allowing it to interpret the sequence of characters in some concrete way.  The system will never assign a concrete type to some random sequence of characters based upon what those characters are.  Forgive the idiom, but to do otherwise would be putting the cart before the horse.  It would also be quite expensive and prone to, as above, different types deciding on the same textual representation being valid input to each.David J.", "msg_date": "Sun, 7 Feb 2021 12:09:42 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: jsonb_array_elements_recursive()" } ]
[ { "msg_contents": "Amit, Ajin, hackers,\n\ntesting logical decoding for two-phase transactions, I stumbled over \nwhat I first thought is a bug. But comments seems to indicate this is \nintended behavior. Could you please clarify or elaborate on the design \ndecision? Or indicate this indeed is a bug?\n\nWhat puzzled me is that if a decoder is restarted in between the PREPARE \nand the COMMIT PREPARED, it repeats the entire transaction, despite it \nbeing already sent and potentially prepared on the receiving side.\n\nIn terms of `pg_logical_slot_get_changes` (and roughly from the \nprepare.sql test), this looks as follows:\n\n data\n----------------------------------------------------\n BEGIN\n table public.test_prepared1: INSERT: id[integer]:1\n PREPARE TRANSACTION 'test_prepared#1'\n(3 rows)\n\n\nThis is the first delivery of the transaction. After a restart, it will \nget all of the changes again, though:\n\n\n data\n----------------------------------------------------\n BEGIN\n table public.test_prepared1: INSERT: id[integer]:1\n PREPARE TRANSACTION 'test_prepared#1'\n COMMIT PREPARED 'test_prepared#1'\n(4 rows)\n\n\nI did not expect this, as any receiver that wants to have decoded 2PC is \nlikely supporting some kind of two-phase commits itself. And would \ntherefore prepare the transaction upon its first reception. Potentially \nreceiving it a second time would require complicated filtering on every \nprepared transaction.\n\nFurthermore, this clearly and unnecessarily holds back the restart LSN. \nMeaning even just a single prepared transaction can block advancing the \nrestart LSN. In most cases, these are short lived. But on the other \nhand, there may be an arbitrary amount of other transactions in between \na PREPARE and the corresponding COMMIT PREPARED in the WAL. Not being \nable to advance over a prepared transaction seems like a bad thing in \nsuch a case.\n\nI fail to see where this repetition would ever be useful. 
Is there any \nreason for the current implementation that I'm missing or can this be \ncorrected? Thanks for elaborating.\n\nRegards\n\nMarkus\n\n\n", "msg_date": "Mon, 8 Feb 2021 09:31:12 +0100", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": true, "msg_subject": "repeated decoding of prepared transactions" }, { "msg_contents": "On Mon, Feb 8, 2021 at 2:01 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> Amit, Ajin, hackers,\n>\n> testing logical decoding for two-phase transactions, I stumbled over\n> what I first thought is a bug. But comments seems to indicate this is\n> intended behavior. Could you please clarify or elaborate on the design\n> decision? Or indicate this indeed is a bug?\n>\n> What puzzled me is that if a decoder is restarted in between the PREPARE\n> and the COMMIT PREPARED, it repeats the entire transaction, despite it\n> being already sent and potentially prepared on the receiving side.\n>\n> In terms of `pg_logical_slot_get_changes` (and roughly from the\n> prepare.sql test), this looks as follows:\n>\n> data\n> ----------------------------------------------------\n> BEGIN\n> table public.test_prepared1: INSERT: id[integer]:1\n> PREPARE TRANSACTION 'test_prepared#1'\n> (3 rows)\n>\n>\n> This is the first delivery of the transaction. After a restart, it will\n> get all of the changes again, though:\n>\n>\n> data\n> ----------------------------------------------------\n> BEGIN\n> table public.test_prepared1: INSERT: id[integer]:1\n> PREPARE TRANSACTION 'test_prepared#1'\n> COMMIT PREPARED 'test_prepared#1'\n> (4 rows)\n>\n>\n> I did not expect this, as any receiver that wants to have decoded 2PC is\n> likely supporting some kind of two-phase commits itself. And would\n> therefore prepare the transaction upon its first reception. 
Potentially\n> receiving it a second time would require complicated filtering on every\n> prepared transaction.\n>\n\nThe reason was mentioned in ReorderBufferFinishPrepared(). See below\ncomments in code:\n/*\n * It is possible that this transaction is not decoded at prepare time\n * either because by that time we didn't have a consistent snapshot or it\n * was decoded earlier but we have restarted. We can't distinguish between\n * those two cases so we send the prepare in both the cases and let\n * downstream decide whether to process or skip it. We don't need to\n * decode the xact for aborts if it is not done already.\n */\nThis won't happen when we replicate via pgoutput (the patch for which\nis still not committed) because it won't restart from a previous point\n(unless the server needs to be restarted due to some reason) as you\nare doing via logical decoding APIs. Now, we don't send again the\nprepared xacts on repeated calls of pg_logical_slot_get_changes()\nunless we encounter commit. This behavior is already explained in docs\n[1] (See, Transaction Begin Prepare Callback in docs) and a way to\nskip the prepare.\n\n> Furthermore, this clearly and unnecessarily holds back the restart LSN.\n> Meaning even just a single prepared transaction can block advancing the\n> restart LSN. In most cases, these are short lived. But on the other\n> hand, there may be an arbitrary amount of other transactions in between\n> a PREPARE and the corresponding COMMIT PREPARED in the WAL. Not being\n> able to advance over a prepared transaction seems like a bad thing in\n> such a case.\n>\n\nThat anyway is true without this work as well where restart_lsn can be\nadvanced on commits. 
We haven't changed anything in that regard.\n\n[1] - https://www.postgresql.org/docs/devel/logicaldecoding-output-plugin.html\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 8 Feb 2021 15:43:31 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "Hello Amit,\n\nthanks for your very quick response.\n\nOn 08.02.21 11:13, Amit Kapila wrote:\n> /*\n> * It is possible that this transaction is not decoded at prepare time\n> * either because by that time we didn't have a consistent snapshot or it\n> * was decoded earlier but we have restarted. We can't distinguish between\n> * those two cases so we send the prepare in both the cases and let\n> * downstream decide whether to process or skip it. We don't need to\n> * decode the xact for aborts if it is not done already.\n> */\n\nThe way I read the surrounding code, the only case a 2PC transaction \ndoes not get decoded a prepare time is if the transaction is empty. Or \nare you aware of any other situation that might currently happen?\n\n> (unless the server needs to be restarted due to some reason)\n\nRight, the repetition occurs only after a restart of the walsender in \nbetween a prepare and a commit prepared record.\n\n> That anyway is true without this work as well where restart_lsn can be\n> advanced on commits. We haven't changed anything in that regard.\n\nI did not mean to blame the patch, but merely try to understand some of \nthe design decisions behind it.\n\nAnd as I just learned, even if we managed to avoid the repetition, a \nrestarted walsender still needs to see prepared transactions as \nin-progress in its snapshots. 
So we cannot move forward the restart_lsn \nto after a prepare record (until the final commit or rollback is consumed).\n\nBest Regards\n\nMarkus\n\n\n", "msg_date": "Mon, 8 Feb 2021 16:06:30 +0100", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Mon, Feb 8, 2021 at 8:36 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> Hello Amit,\n>\n> thanks for your very quick response.\n>\n> On 08.02.21 11:13, Amit Kapila wrote:\n> > /*\n> > * It is possible that this transaction is not decoded at prepare time\n> > * either because by that time we didn't have a consistent snapshot or it\n> > * was decoded earlier but we have restarted. We can't distinguish between\n> > * those two cases so we send the prepare in both the cases and let\n> > * downstream decide whether to process or skip it. We don't need to\n> > * decode the xact for aborts if it is not done already.\n> > */\n>\n> The way I read the surrounding code, the only case a 2PC transaction\n> does not get decoded a prepare time is if the transaction is empty. Or\n> are you aware of any other situation that might currently happen?\n>\n\nWe also skip decoding at prepare time if we haven't reached a\nconsistent snapshot by that time. See below code in DecodePrepare().\nDecodePrepare()\n{\n..\n/* We can't start streaming unless a consistent state is reached. 
*/\nif (SnapBuildCurrentState(builder) < SNAPBUILD_CONSISTENT)\n{\nReorderBufferSkipPrepare(ctx->reorder, xid);\nreturn;\n}\n..\n}\n\nThere are other reasons as well like the output plugin doesn't want to\nallow decoding at prepare time but I don't think those are relevant to\nthe discussion here.\n\n> > (unless the server needs to be restarted due to some reason)\n>\n> Right, the repetition occurs only after a restart of the walsender in\n> between a prepare and a commit prepared record.\n>\n> > That anyway is true without this work as well where restart_lsn can be\n> > advanced on commits. We haven't changed anything in that regard.\n>\n> I did not mean to blame the patch, but merely try to understand some of\n> the design decisions behind it.\n>\n> And as I just learned, even if we managed to avoid the repetition, a\n> restarted walsender still needs to see prepared transactions as\n> in-progress in its snapshots. So we cannot move forward the restart_lsn\n> to after a prepare record (until the final commit or rollback is consumed).\n>\n\nRight and say if we forget the prepared transactions and move forward\nwith restart_lsn once we get the prepare for any transaction. Then we\nwill open up a window where we haven't actually sent the prepared xact\nbecause of say \"snapshot has not yet reached consistent state\" and we\nhave moved the restart_lsn. Then later when we get the commit\ncorresponding to the prepared transaction by which time say the\n\"snapshot has reached consistent state\" then we will miss sending the\ntransaction contents and prepare for it. 
I think for such reasons we\nallow restart_lsn to moved only once the transaction is finished\n(committed or rolled back).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 9 Feb 2021 08:32:42 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Tue, Feb 9, 2021 at 8:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Feb 8, 2021 at 8:36 PM Markus Wanner\n> <markus.wanner@enterprisedb.com> wrote:\n> >\n> > Hello Amit,\n> >\n> > thanks for your very quick response.\n> >\n> > On 08.02.21 11:13, Amit Kapila wrote:\n> > > /*\n> > > * It is possible that this transaction is not decoded at prepare time\n> > > * either because by that time we didn't have a consistent snapshot or it\n> > > * was decoded earlier but we have restarted. We can't distinguish between\n> > > * those two cases so we send the prepare in both the cases and let\n> > > * downstream decide whether to process or skip it. We don't need to\n> > > * decode the xact for aborts if it is not done already.\n> > > */\n> >\n> > The way I read the surrounding code, the only case a 2PC transaction\n> > does not get decoded a prepare time is if the transaction is empty. Or\n> > are you aware of any other situation that might currently happen?\n> >\n>\n> We also skip decoding at prepare time if we haven't reached a\n> consistent snapshot by that time. See below code in DecodePrepare().\n> DecodePrepare()\n> {\n> ..\n> /* We can't start streaming unless a consistent state is reached. */\n> if (SnapBuildCurrentState(builder) < SNAPBUILD_CONSISTENT)\n> {\n> ReorderBufferSkipPrepare(ctx->reorder, xid);\n> return;\n> }\n> ..\n> }\n\nCan you please provide steps which can lead to this situation? 
If\nthere is an earlier discussion which has example scenarios, please\npoint us to the relevant thread.\n\nIf we are not sending PREPARED transactions that's fine, but sending\nthe same prepared transaction as many times as the WAL sender is\nrestarted between sending prepare and commit prepared is a waste of\nnetwork bandwidth. The wastage is proportional to the amount of\nchanges in the transaction and number of such transactions themselves.\nAlso this will cause performance degradation. So if we can avoid\nresending prepared transactions twice that will help.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 9 Feb 2021 11:29:14 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Tue, Feb 9, 2021 at 4:59 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n\n> Can you please provide steps which can lead to this situation? If\n> there is an earlier discussion which has example scenarios, please\n> point us to the relevant thread.\n>\n> If we are not sending PREPARED transactions that's fine, but sending\n> the same prepared transaction as many times as the WAL sender is\n> restarted between sending prepare and commit prepared is a waste of\n> network bandwidth. The wastage is proportional to the amount of\n> changes in the transaction and number of such transactions themselves.\n> Also this will cause performance degradation. 
So if we can avoid\n> resending prepared transactions twice that will help.\n\nOne of this scenario is explained in the test case in\n\npostgres/contrib/test_decoding/specs/twophase_snapshot.spec\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n", "msg_date": "Tue, 9 Feb 2021 17:27:29 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Tue, Feb 9, 2021 at 11:29 AM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Tue, Feb 9, 2021 at 8:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Feb 8, 2021 at 8:36 PM Markus Wanner\n> > <markus.wanner@enterprisedb.com> wrote:\n> > >\n> > > Hello Amit,\n> > >\n> > > thanks for your very quick response.\n> > >\n> > > On 08.02.21 11:13, Amit Kapila wrote:\n> > > > /*\n> > > > * It is possible that this transaction is not decoded at prepare time\n> > > > * either because by that time we didn't have a consistent snapshot or it\n> > > > * was decoded earlier but we have restarted. We can't distinguish between\n> > > > * those two cases so we send the prepare in both the cases and let\n> > > > * downstream decide whether to process or skip it. We don't need to\n> > > > * decode the xact for aborts if it is not done already.\n> > > > */\n> > >\n> > > The way I read the surrounding code, the only case a 2PC transaction\n> > > does not get decoded a prepare time is if the transaction is empty. Or\n> > > are you aware of any other situation that might currently happen?\n> > >\n> >\n> > We also skip decoding at prepare time if we haven't reached a\n> > consistent snapshot by that time. See below code in DecodePrepare().\n> > DecodePrepare()\n> > {\n> > ..\n> > /* We can't start streaming unless a consistent state is reached. 
*/\n> > if (SnapBuildCurrentState(builder) < SNAPBUILD_CONSISTENT)\n> > {\n> > ReorderBufferSkipPrepare(ctx->reorder, xid);\n> > return;\n> > }\n> > ..\n> > }\n>\n> Can you please provide steps which can lead to this situation?\n>\n\nAjin has already shared the example with you.\n\n> If\n> there is an earlier discussion which has example scenarios, please\n> point us to the relevant thread.\n>\n\nIt started in the email [1] and from there you can read later emails\nto know more about this.\n\n> If we are not sending PREPARED transactions that's fine,\n>\n\nHmm, I am not sure if that is fine because if the output plugin sets\nthe two-phase-commit option, it would expect all prepared xacts to\narrive not some only some of them.\n\n> but sending\n> the same prepared transaction as many times as the WAL sender is\n> restarted between sending prepare and commit prepared is a waste of\n> network bandwidth.\n>\n\nI think similar happens without any of the work done in PG-14 as well\nif we restart the apply worker before the commit completes on the\nsubscriber. After the restart, we will send the start_decoding_at\npoint based on some previous commit which will make publisher send the\nentire transaction again. I don't think restart of WAL sender or WAL\nreceiver is such a common thing. 
It can only happen due to some bug in\ncode or user wishes to stop the nodes or some crash happened.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1%2Bd3gzCyzsYjt1m6sfGf_C_uFmo9JK%3D3Wafp6yR8Mg8uQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 9 Feb 2021 17:27:12 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Tue, Feb 9, 2021 at 6:57 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I think similar happens without any of the work done in PG-14 as well\n> if we restart the apply worker before the commit completes on the\n> subscriber. After the restart, we will send the start_decoding_at\n> point based on some previous commit which will make publisher send the\n> entire transaction again. I don't think restart of WAL sender or WAL\n> receiver is such a common thing. It can only happen due to some bug in\n> code or user wishes to stop the nodes or some crash happened.\n\nReally? My impression is that the logical replication protocol is\nsupposed to be designed in such a way that once a transaction is\nsuccessfully confirmed, it won't be sent again. Now if something is\nnot confirmed then it has to be sent again. But if it is confirmed\nthen it shouldn't happen.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 9 Feb 2021 13:38:15 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Wed, Feb 10, 2021 at 12:08 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Feb 9, 2021 at 6:57 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I think similar happens without any of the work done in PG-14 as well\n> > if we restart the apply worker before the commit completes on the\n> > subscriber. 
After the restart, we will send the start_decoding_at\n> > point based on some previous commit which will make publisher send the\n> > entire transaction again. I don't think restart of WAL sender or WAL\n> > receiver is such a common thing. It can only happen due to some bug in\n> > code or user wishes to stop the nodes or some crash happened.\n>\n> Really? My impression is that the logical replication protocol is\n> supposed to be designed in such a way that once a transaction is\n> successfully confirmed, it won't be sent again. Now if something is\n> not confirmed then it has to be sent again. But if it is confirmed\n> then it shouldn't happen.\n>\n\nIf by successfully confirmed, you mean that once the subscriber node\nhas received, it won't be sent again then as far as I know that is not\ntrue. We rely on the flush location sent by the subscriber to advance\nthe decoding locations. We update the flush locations after we apply\nthe transaction's commit successfully. Also, after the restart, we use\nthe replication origin's last flush location as a point from where we\nneed the transactions and the origin's progress is updated at commit\ntime.\n\nOTOH, If by successfully confirmed, you mean that once the subscriber\nhas applied the complete transaction (including commit), then you are\nright that it won't be sent again.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 10 Feb 2021 08:02:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Wed, Feb 10, 2021 at 8:02 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Wed, Feb 10, 2021 at 12:08 AM Robert Haas <robertmhaas@gmail.com>\n> wrote:\n> >\n> > On Tue, Feb 9, 2021 at 6:57 AM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> > > I think similar happens without any of the work done in PG-14 as well\n> > > if we restart the apply worker before the commit completes on the\n> 
> > subscriber. After the restart, we will send the start_decoding_at\n> > > point based on some previous commit which will make publisher send the\n> > > entire transaction again. I don't think restart of WAL sender or WAL\n> > > receiver is such a common thing. It can only happen due to some bug in\n> > > code or user wishes to stop the nodes or some crash happened.\n> >\n> > Really? My impression is that the logical replication protocol is\n> > supposed to be designed in such a way that once a transaction is\n> > successfully confirmed, it won't be sent again. Now if something is\n> > not confirmed then it has to be sent again. But if it is confirmed\n> > then it shouldn't happen.\n> >\n>\n> If by successfully confirmed, you mean that once the subscriber node\n> has received, it won't be sent again then as far as I know that is not\n> true. We rely on the flush location sent by the subscriber to advance\n> the decoding locations. We update the flush locations after we apply\n> the transaction's commit successfully. Also, after the restart, we use\n> the replication origin's last flush location as a point from where we\n> need the transactions and the origin's progress is updated at commit\n> time.\n>\n> OTOH, If by successfully confirmed, you mean that once the subscriber\n> has applied the complete transaction (including commit), then you are\n> right that it won't be sent again.\n>\n\nI think we need to treat a prepared transaction slightly different from an\nuncommitted transaction when sending downstream. We need to send a whole\nuncommitted transaction downstream again because previously applied changes\nmust have been aborted and hence lost by the downstream and thus it needs\nto get all of those again. But when a downstream prepares a transaction,\neven if it's not committed, those changes are not lost even after restart\nof a walsender. 
If the downstream confirms that it has \"flushed\" PREPARE,\nthere is no need to send all the changes again.\n\n--\nBest Wishes,\nAshutosh", "msg_date": "Wed, 10 Feb 2021 10:13:48 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Wed, Feb 10, 2021 at 3:43 PM Ashutosh Bapat\n<ashutosh.bapat@enterprisedb.com> wrote:\n\n\n> I think we need to treat a prepared transaction slightly different from an uncommitted transaction when sending downstream. We need to send a whole uncommitted transaction downstream again because previously applied changes must have been aborted and hence lost by the downstream and thus it needs to get all of those again. But when a downstream prepares a transaction, even if it's not committed, those changes are not lost even after restart of a walsender. 
If the downstream confirms that it has \"flushed\" PREPARE, there is no need to send all the changes again.\n\nBut the other side of the problem is that ,without this, if the\nprepared transaction is prior to a consistent snapshot when decoding\nstarts/restarts, then only the \"commit prepared\" is sent to downstream\n(as seen in the test scenario I shared above), and downstream has to\nerror away the commit prepared because it does not have the\ncorresponding prepared transaction. We did not find an easy way to\ndistinguish between these two scenarios for prepared transactions.\na. A consistent snapshot being formed in between a prepare and a\ncommit prepared for the first time.\nb. Decoder restarting between a prepare and a commit prepared.\n\nFor plugins to be able to handle this, we have added a special\ncallback \"Begin Prepare\" as explained in [1] section 49.6.4.10\n\n\"The required begin_prepare_cb callback is called whenever the start\nof a prepared transaction has been decoded. The gid field, which is\npart of the txn parameter can be used in this callback to check if the\nplugin has already received this prepare in which case it can skip the\nremaining changes of the transaction. 
This can only happen if the user\nrestarts the decoding after receiving the prepare for a transaction\nbut before receiving the commit prepared say because of some error.\"\n\nThe pgoutput plugin is also being updated to be able to handle this\nsituation of duplicate prepared transactions.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n", "msg_date": "Wed, 10 Feb 2021 17:14:59 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Wed, Feb 10, 2021 at 11:45 AM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Wed, Feb 10, 2021 at 3:43 PM Ashutosh Bapat\n> <ashutosh.bapat@enterprisedb.com> wrote:\n>\n>\n> > I think we need to treat a prepared transaction slightly different from an uncommitted transaction when sending downstream. We need to send a whole uncommitted transaction downstream again because previously applied changes must have been aborted and hence lost by the downstream and thus it needs to get all of those again. But when a downstream prepares a transaction, even if it's not committed, those changes are not lost even after restart of a walsender. If the downstream confirms that it has \"flushed\" PREPARE, there is no need to send all the changes again.\n>\n> But the other side of the problem is that ,without this, if the\n> prepared transaction is prior to a consistent snapshot when decoding\n> starts/restarts, then only the \"commit prepared\" is sent to downstream\n> (as seen in the test scenario I shared above), and downstream has to\n> error away the commit prepared because it does not have the\n> corresponding prepared transaction.\n>\n\nI think it is not only simple error handling, it is required for\ndata-consistency. 
We need to send the transactions whose commits are\nencountered after a consistent snapshot is reached.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 10 Feb 2021 12:02:20 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On 10.02.21 07:32, Amit Kapila wrote:\n> On Wed, Feb 10, 2021 at 11:45 AM Ajin Cherian <itsajin@gmail.com> wrote:\n>> But the other side of the problem is that ,without this, if the\n>> prepared transaction is prior to a consistent snapshot when decoding\n>> starts/restarts, then only the \"commit prepared\" is sent to downstream\n>> (as seen in the test scenario I shared above), and downstream has to\n>> error away the commit prepared because it does not have the\n>> corresponding prepared transaction.\n> \n> I think it is not only simple error handling, it is required for\n> data-consistency. We need to send the transactions whose commits are\n> encountered after a consistent snapshot is reached.\n\nI'm with Ashutosh here. If a replica is properly in sync, it knows \nabout prepared transactions and all the gids of those. Sending the \ntransactional changes and the prepare again is inconsistent.\n\nThe point of a two-phase transaction is to have two phases. An output \nplugin must have the chance of treating them as independent events. \nOnce a PREPARE is confirmed, it must not be sent again. 
Even if the \ntransaction is still in-progress and its changes are not yet visible on \nthe origin node.\n\nRegards\n\nMarkus\n\n\n", "msg_date": "Wed, 10 Feb 2021 09:10:19 +0100", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Wed, Feb 10, 2021 at 1:40 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> On 10.02.21 07:32, Amit Kapila wrote:\n> > On Wed, Feb 10, 2021 at 11:45 AM Ajin Cherian <itsajin@gmail.com> wrote:\n> >> But the other side of the problem is that ,without this, if the\n> >> prepared transaction is prior to a consistent snapshot when decoding\n> >> starts/restarts, then only the \"commit prepared\" is sent to downstream\n> >> (as seen in the test scenario I shared above), and downstream has to\n> >> error away the commit prepared because it does not have the\n> >> corresponding prepared transaction.\n> >\n> > I think it is not only simple error handling, it is required for\n> > data-consistency. We need to send the transactions whose commits are\n> > encountered after a consistent snapshot is reached.\n>\n> I'm with Ashutosh here. If a replica is properly in sync, it knows\n> about prepared transactions and all the gids of those. Sending the\n> transactional changes and the prepare again is inconsistent.\n>\n> The point of a two-phase transaction is to have two phases. An output\n> plugin must have the chance of treating them as independent events.\n>\n\nI am not sure I understand what problem you are facing to deal with\nthis in the output plugin, it is explained in docs and Ajin also\npointed out the same. 
Ajin and I have explained to you the design\nconstraints on the publisher-side due to which we have done this way.\nDo you have any better ideas to deal with this?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 10 Feb 2021 15:40:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Tue, Feb 9, 2021 at 9:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> If by successfully confirmed, you mean that once the subscriber node\n> has received, it won't be sent again then as far as I know that is not\n> true. We rely on the flush location sent by the subscriber to advance\n> the decoding locations. We update the flush locations after we apply\n> the transaction's commit successfully. Also, after the restart, we use\n> the replication origin's last flush location as a point from where we\n> need the transactions and the origin's progress is updated at commit\n> time.\n>\n> OTOH, If by successfully confirmed, you mean that once the subscriber\n> has applied the complete transaction (including commit), then you are\n> right that it won't be sent again.\n\nI meant - once the subscriber has advanced the flush location.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 10 Feb 2021 10:06:35 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Mon, Feb 8, 2021 at 2:01 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> I did not expect this, as any receiver that wants to have decoded 2PC is\n> likely supporting some kind of two-phase commits itself. And would\n> therefore prepare the transaction upon its first reception. 
Potentially\n> receiving it a second time would require complicated filtering on every\n> prepared transaction.\n>\n\nI would like to bring one other scenario to your notice where you\nmight want to handle things differently for prepared transactions on\nthe plugin side. Assume we have multiple publications (for simplicity\nsay 2) on publisher with corresponding subscriptions (say 2, each\ncorresponding to one publication on the publisher). When a user\nperforms a transaction on a publisher that involves the tables from\nboth publications, on the subscriber-side, we do that work via two\ndifferent transactions, corresponding to each subscription. But, we\nneed some way to deal with prepared xacts because they need GID and we\ncan't use the same GID for both subscriptions. Please see the detailed\nexample and one idea to deal with the same in the main thread[1]. It\nwould be really helpful if you or others working on the plugin side\ncan share your opinion on the same.\n\nNow, coming back to the restart case where the prepared transaction\ncan be sent again by the publisher. I understand yours and others\npoint that we should not send prepared transaction if there is a\nrestart between prepare and commit but there are reasons why we have\ndone that way and I am open to your suggestions. I'll once again try\nto explain the exact case to you which is not very apparent. The basic\nidea is that we ship/replay all transactions where commit happens\nafter the snapshot has a consistent state (SNAPBUILD_CONSISTENT), see\natop snapbuild.c for details. Now, for transactions where prepare is\nbefore snapshot state SNAPBUILD_CONSISTENT and commit prepared is\nafter SNAPBUILD_CONSISTENT, we need to send the entire transaction\nincluding prepare at the commit time. 
One might think it is quite easy\nto detect that, basically if we skip prepare when the snapshot state\nwas not SNAPBUILD_CONSISTENT, then mark a flag in ReorderBufferTxn and\nuse the same to detect during commit and accordingly take the decision\nto send prepare but unfortunately it is not that easy. There is always\na chance that on restart we reuse the snapshot serialized by some\nother Walsender at a location prior to Prepare and if that happens\nthen this time the prepare won't be skipped due to snapshot state\n(SNAPBUILD_CONSISTENT) but due to start_decoding_at point (considering\nwe have already shipped some of the later commits but not prepare).\nNow, this will actually become the same situation where the restart\nhas happened after we have sent the prepare but not commit. This is\nthe reason we have to resend the prepare when the subscriber restarts\nbetween prepare and commit.\n\nYou can reproduce the case where we can't distinguish between two\nsituations by using the test case in twophase_snapshot.spec and\nadditionally starting a separate session via the debugger. So, the\nsteps in the test case are as below:\n\n\"s2b\" \"s2txid\" \"s1init\" \"s3b\" \"s3txid\" \"s2c\" \"s2b\" \"s2insert\" \"s2p\"\n\"s3c\" \"s1insert\" \"s1start\" \"s2cp\" \"s1start\"\n\nDefine new steps as\n\n\"s4init\" {SELECT 'init' FROM\npg_create_logical_replication_slot('isolation_slot_1',\n'test_decoding');}\n\"s4start\" {SELECT data FROM\npg_logical_slot_get_changes('isolation_slot_1', NULL, NULL,\n'include-xids', 'false', 'skip-empty-xacts', '1', 'two-phase-commit',\n'1');}\n\nThe first thing we need to do is s4init and stop the debugger in\nSnapBuildProcessRunningXacts. Now perform steps from 's2b' till first\n's1start' in twophase_snapshot.spec. Then continue in the s4 session\nand perform s4start. 
After this, if you debug (or add the logs) the\nsecond s1start, you will notice that we are skipping prepare not\nbecause of inconsistent snapshot but a forward location in\nstart_decoding_at. If you don't involve session-4, then it will always\nskip prepare due to an inconsistent snapshot state. This involves a\ndebugger so not easy to write an automated test for it.\n\nI have used a bit tricky scenario to explain this but not sure if\nthere was any other simpler way.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1%2BLvkeX%3DB3xon7RcBwD4CVaFSryPj3pTBAALrDxQVPDwA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 11 Feb 2021 16:06:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "Hello Amit,\n\nthanks a lot for your extensive explanation and examples, I appreciate \nthis very much. I'll need to think this through and see how we can make \nthis work for us.\n\nBest Regards\n\nMarkus\n\n\n", "msg_date": "Thu, 11 Feb 2021 13:16:49 +0100", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Thu, Feb 11, 2021 at 5:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> to explain the exact case to you which is not very apparent. The basic\n> idea is that we ship/replay all transactions where commit happens\n> after the snapshot has a consistent state (SNAPBUILD_CONSISTENT), see\n> atop snapbuild.c for details. 
Now, for transactions where prepare is\n> before snapshot state SNAPBUILD_CONSISTENT and commit prepared is\n> after SNAPBUILD_CONSISTENT, we need to send the entire transaction\n> including prepare at the commit time.\n\nThis might be a dumb question, but: why?\n\nIs this because the effects of the prepared transaction might\notherwise be included neither in the initial synchronization of the\ndata nor in any subsequently decoded transaction, thus leaving the\nreplica out of sync?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Feb 2021 14:40:08 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Fri, Feb 12, 2021 at 1:10 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Feb 11, 2021 at 5:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > to explain the exact case to you which is not very apparent. The basic\n> > idea is that we ship/replay all transactions where commit happens\n> > after the snapshot has a consistent state (SNAPBUILD_CONSISTENT), see\n> > atop snapbuild.c for details. 
Now, for transactions where prepare is\n> > before snapshot state SNAPBUILD_CONSISTENT and commit prepared is\n> > after SNAPBUILD_CONSISTENT, we need to send the entire transaction\n> > including prepare at the commit time.\n>\n> This might be a dumb question, but: why?\n>\n> Is this because the effects of the prepared transaction might\n> otherwise be included neither in the initial synchronization of the\n> data nor in any subsequently decoded transaction, thus leaving the\n> replica out of sync?\n>\n\nYes.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 12 Feb 2021 16:03:11 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "Hi,\n\nOn 2021-02-10 08:02:17 +0530, Amit Kapila wrote:\n> On Wed, Feb 10, 2021 at 12:08 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Tue, Feb 9, 2021 at 6:57 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > I think similar happens without any of the work done in PG-14 as well\n> > > if we restart the apply worker before the commit completes on the\n> > > subscriber. After the restart, we will send the start_decoding_at\n> > > point based on some previous commit which will make publisher send the\n> > > entire transaction again. I don't think restart of WAL sender or WAL\n> > > receiver is such a common thing. It can only happen due to some bug in\n> > > code or user wishes to stop the nodes or some crash happened.\n> >\n> > Really? My impression is that the logical replication protocol is\n> > supposed to be designed in such a way that once a transaction is\n> > successfully confirmed, it won't be sent again. Now if something is\n> > not confirmed then it has to be sent again. 
But if it is confirmed\n> > then it shouldn't happen.\n\nCorrect.\n\n\n> If by successfully confirmed, you mean that once the subscriber node\n> has received, it won't be sent again then as far as I know that is not\n> true. We rely on the flush location sent by the subscriber to advance\n> the decoding locations. We update the flush locations after we apply\n> the transaction's commit successfully. Also, after the restart, we use\n> the replication origin's last flush location as a point from where we\n> need the transactions and the origin's progress is updated at commit\n> time.\n\nThat's not quite right. Yes, the flush location isn't guaranteed to be\nupdated at that point, but a replication client will send the last\nlocation they've received and successfully processed, and that has to\n*guarantee* that they won't receive anything twice, or miss\nsomething. Otherwise you've broken the protocol.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 13 Feb 2021 08:32:37 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "> \n> On 13 Feb 2021, at 17:32, Andres Freund <andres@anarazel.de> wrote:\n> \n> Hi,\n> \n> On 2021-02-10 08:02:17 +0530, Amit Kapila wrote:\n>> On Wed, Feb 10, 2021 at 12:08 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>>> \n>> If by successfully confirmed, you mean that once the subscriber node\n>> has received, it won't be sent again then as far as I know that is not\n>> true. We rely on the flush location sent by the subscriber to advance\n>> the decoding locations. We update the flush locations after we apply\n>> the transaction's commit successfully. Also, after the restart, we use\n>> the replication origin's last flush location as a point from where we\n>> need the transactions and the origin's progress is updated at commit\n>> time.\n> \n> That's not quite right. 
Yes, the flush location isn't guaranteed to be\n> updated at that point, but a replication client will send the last\n> location they've received and successfully processed, and that has to\n> *guarantee* that they won't receive anything twice, or miss\n> something. Otherwise you've broken the protocol.\n> \n\nAgreed, if we relied purely on flush location of a slot, there would be no need for origins to track the lsn. AFAIK this is exactly why origins are Wal logged along with transaction, it allows us to guarantee never getting anything that has been durably written.\n\n—\nPetr\n\n", "msg_date": "Sat, 13 Feb 2021 17:37:29 +0100", "msg_from": "Petr Jelinek <petr.jelinek@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "Hi,\n\nOn 2021-02-13 17:37:29 +0100, Petr Jelinek wrote:\n> Agreed, if we relied purely on flush location of a slot, there would\n> be no need for origins to track the lsn.\n\nAnd we would be latency bound replicating transactions, which'd not be\nfun for single-insert ones for example...\n\n\n> AFAIK this is exactly why origins are Wal logged along with\n> transaction, it allows us to guarantee never getting anything that has\n> been durably 
written.\n>\n> I think you'd need something like origins in that case, because\n> something could still go wrong before the other side has received the\n> flush (network disconnect, primary crash, ...).\n>\n\nWe are already using origins in apply-worker to guarantee that and\nwith each commit, the origin's lsn location is also WAL-logged. That\nhelps us to send the start location for a slot after the restart. As\nfar as I understand this is how it works from the apply-worker side. I\nam not sure if I am missing something here?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 15 Feb 2021 09:24:13 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Thu, Feb 11, 2021 at 4:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Feb 8, 2021 at 2:01 PM Markus Wanner\n> <markus.wanner@enterprisedb.com> wrote:\n> >\n> Now, coming back to the restart case where the prepared transaction\n> can be sent again by the publisher. I understand yours and others\n> point that we should not send prepared transaction if there is a\n> restart between prepare and commit but there are reasons why we have\n> done that way and I am open to your suggestions. I'll once again try\n> to explain the exact case to you which is not very apparent. The basic\n> idea is that we ship/replay all transactions where commit happens\n> after the snapshot has a consistent state (SNAPBUILD_CONSISTENT), see\n> atop snapbuild.c for details. Now, for transactions where prepare is\n> before snapshot state SNAPBUILD_CONSISTENT and commit prepared is\n> after SNAPBUILD_CONSISTENT, we need to send the entire transaction\n> including prepare at the commit time. 
One might think it is quite easy\n> to detect that, basically if we skip prepare when the snapshot state\n> was not SNAPBUILD_CONSISTENT, then mark a flag in ReorderBufferTxn and\n> use the same to detect during commit and accordingly take the decision\n> to send prepare but unfortunately it is not that easy. There is always\n> a chance that on restart we reuse the snapshot serialized by some\n> other Walsender at a location prior to Prepare and if that happens\n> then this time the prepare won't be skipped due to snapshot state\n> (SNAPBUILD_CONSISTENT) but due to start_decoding_at point (considering\n> we have already shipped some of the later commits but not prepare).\n> Now, this will actually become the same situation where the restart\n> has happened after we have sent the prepare but not commit. This is\n> the reason we have to resend the prepare when the subscriber restarts\n> between prepare and commit.\n>\n\nAfter further thinking on this problem and some off-list discussions\nwith Ajin, there appears to be another way to solve the above problem\nby which we can avoid resending the prepare after restart if it has\nalready been processed by the subscriber. The main reason why we were\nnot able to distinguish between the two cases ((a) prepare happened\nbefore SNAPBUILD_CONSISTENT state but commit prepared happened after\nwe reach SNAPBUILD_CONSISTENT state and (b) prepare is already\ndecoded, successfully processed by the subscriber and we have\nrestarted the decoding) is that we can re-use the serialized snapshot\nat LSN location prior to Prepare of some concurrent WALSender after\nthe restart. Now, if we ensure that we don't use serialized snapshots\nfor decoding via slots where two_phase decoding option is enabled then\nwe won't have that problem. 
The drawback is that in some cases it can\ntake a bit more time for initial snapshot building but maybe that is\nbetter than the current solution.\n\nAny suggestions?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 16 Feb 2021 09:43:15 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Tue, Feb 16, 2021 at 9:43 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Feb 11, 2021 at 4:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Feb 8, 2021 at 2:01 PM Markus Wanner\n> > <markus.wanner@enterprisedb.com> wrote:\n> > >\n> > Now, coming back to the restart case where the prepared transaction\n> > can be sent again by the publisher. I understand yours and others\n> > point that we should not send prepared transaction if there is a\n> > restart between prepare and commit but there are reasons why we have\n> > done that way and I am open to your suggestions. I'll once again try\n> > to explain the exact case to you which is not very apparent. The basic\n> > idea is that we ship/replay all transactions where commit happens\n> > after the snapshot has a consistent state (SNAPBUILD_CONSISTENT), see\n> > atop snapbuild.c for details. Now, for transactions where prepare is\n> > before snapshot state SNAPBUILD_CONSISTENT and commit prepared is\n> > after SNAPBUILD_CONSISTENT, we need to send the entire transaction\n> > including prepare at the commit time. One might think it is quite easy\n> > to detect that, basically if we skip prepare when the snapshot state\n> > was not SNAPBUILD_CONSISTENT, then mark a flag in ReorderBufferTxn and\n> > use the same to detect during commit and accordingly take the decision\n> > to send prepare but unfortunately it is not that easy. 
There is always\n> > a chance that on restart we reuse the snapshot serialized by some\n> > other Walsender at a location prior to Prepare and if that happens\n> > then this time the prepare won't be skipped due to snapshot state\n> > (SNAPBUILD_CONSISTENT) but due to start_decoding_at point (considering\n> > we have already shipped some of the later commits but not prepare).\n> > Now, this will actually become the same situation where the restart\n> > has happened after we have sent the prepare but not commit. This is\n> > the reason we have to resend the prepare when the subscriber restarts\n> > between prepare and commit.\n> >\n>\n> After further thinking on this problem and some off-list discussions\n> with Ajin, there appears to be another way to solve the above problem\n> by which we can avoid resending the prepare after restart if it has\n> already been processed by the subscriber. The main reason why we were\n> not able to distinguish between the two cases ((a) prepare happened\n> before SNAPBUILD_CONSISTENT state but commit prepared happened after\n> we reach SNAPBUILD_CONSISTENT state and (b) prepare is already\n> decoded, successfully processed by the subscriber and we have\n> restarted the decoding) is that we can re-use the serialized snapshot\n> at LSN location prior to Prepare of some concurrent WALSender after\n> the restart. Now, if we ensure that we don't use serialized snapshots\n> for decoding via slots where two_phase decoding option is enabled then\n> we won't have that problem. The drawback is that in some cases it can\n> take a bit more time for initial snapshot building but maybe that is\n> better than the current solution.\n>\n\nI see another thing which we need to address if we have to use the\nabove solution. The issue is if initially the two-pc option for\nsubscription is off and we skipped prepare because of that and then\nsome unrelated commit happened which allowed start_decoding_at point\nto move ahead. 
And then the user enabled the two-pc option for the\nsubscription, then we will again skip prepare because it is behind\nstart_decoding_at point which becomes the same case where prepare\nseems to have already been sent. So, in such a situation with the\nabove solution, we will miss sending the prepared transaction and its\ndata and hence risk making replica out-of-sync. Now, this can be\navoided if we don't allow users to alter the two-pc option once the\nsubscription is created. I am not sure but maybe for the first version\nof this feature that might be okay and we can improve it later if we\nhave better ideas. This will definitely allow us to avoid checks in\nthe plugins and or apply-worker which seems like a good trade-off and\nit will address the concern most people have raised in this thread.\nAny thoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 18 Feb 2021 13:57:54 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Tue, Feb 16, 2021 at 3:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n>\n> After further thinking on this problem and some off-list discussions\n> with Ajin, there appears to be another way to solve the above problem\n> by which we can avoid resending the prepare after restart if it has\n> already been processed by the subscriber. The main reason why we were\n> not able to distinguish between the two cases ((a) prepare happened\n> before SNAPBUILD_CONSISTENT state but commit prepared happened after\n> we reach SNAPBUILD_CONSISTENT state and (b) prepare is already\n> decoded, successfully processed by the subscriber and we have\n> restarted the decoding) is that we can re-use the serialized snapshot\n> at LSN location prior to Prepare of some concurrent WALSender after\n> the restart. 
Now, if we ensure that we don't use serialized snapshots\n> for decoding via slots where two_phase decoding option is enabled then\n> we won't have that problem. The drawback is that in some cases it can\n> take a bit more time for initial snapshot building but maybe that is\n> better than the current solution.\n\nBased on this suggestion, I have created a patch on HEAD which now\ndoes not allow repeated decoding\nof prepared transactions. For this, the code now enforces\nfull_snapshot if two-phase decoding is enabled.\nDo have a look at the patch and see if you have any comments.\n\nCurrently one problem with this, as you have also mentioned in your\nlast mail, is that if initially two-phase is disabled in\ntest_decoding while\ndecoding prepare (causing the prepared transaction to not be decoded)\nand later enabled after the commit prepared (where it assumes that the\ntransaction was decoded at prepare time), then the transaction is not\ndecoded at all. For eg:\n\npostgres=# begin;\nBEGIN\npostgres=*# INSERT INTO do_write DEFAULT VALUES;\nINSERT 0 1\npostgres=*# PREPARE TRANSACTION 'test1';\nPREPARE TRANSACTION\npostgres=# SELECT data FROM\npg_logical_slot_get_changes('isolation_slot', NULL, NULL,\n'include-xids', 'false', 'skip-empty-xacts', '1', 'two-phase-commit',\n'0');\ndata\n------\n(0 rows)\npostgres=# commit prepared 'test1';\nCOMMIT PREPARED\npostgres=# SELECT data FROM\npg_logical_slot_get_changes('isolation_slot', NULL, NULL,\n'include-xids', 'false', 'skip-empty-xacts', '1', 'two-phase-commit',\n'1');\n data\n-------------------------\nCOMMIT PREPARED 'test1' (1 row)\n\n1st pg_logical_slot_get_changes is called with two-phase-commit off,\n2nd is called with two-phase-commit on. 
You can see that the\ntransaction is not decoded at all.\nFor this, I am planning to change the semantics such that\ntwo-phase-commit can only be specified while creating the slot using\npg_create_logical_replication_slot()\nand not in pg_logical_slot_get_changes, thus preventing\ntwo-phase-commit flag from being toggled between restarts of the\ndecoder. Let me know if anybody objects to this\nchange, else I will update that in the next patch.\n\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Fri, 19 Feb 2021 13:50:52 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "Ajin, Amit,\n\nthank you both a lot for thinking this through and even providing a patch.\n\nThe changes in expectation for twophase.out match exactly with what I \nprepared. And the switch with pg_logical_slot_get_changes indeed is \nsomething I had not yet considered, either.\n\nOn 19.02.21 03:50, Ajin Cherian wrote:\n> For this, I am planning to change the semantics such that\n> two-phase-commit can only be specified while creating the slot using\n> pg_create_logical_replication_slot()\n> and not in pg_logical_slot_get_changes, thus preventing\n> two-phase-commit flag from being toggled between restarts of the\n> decoder. Let me know if anybody objects to this\n> change, else I will update that in the next patch.\n\nThis sounds like a good plan to me, yes.\n\n\nHowever, more generally speaking, I suspect you are overthinking this. \nAll of the complexity arises because of the assumption that an output \nplugin receiving and confirming a PREPARE may not be able to persist \nthat first phase of transaction application. Instead, you are trying to \nsomehow resurrect the transactional changes and the prepare at COMMIT \nPREPARED time and decode it in a deferred way.\n\nInstead, I'm arguing that a PREPARE is an atomic operation just like a \ntransaction's COMMIT. 
The decoder should always feed these in the order \nof appearance in the WAL. For example, if you have PREPARE A, COMMIT B, \nCOMMIT PREPARED A in the WAL, the decoder should always output these \nevents in exactly that order. And not ever COMMIT B, PREPARE A, COMMIT \nPREPARED A (which is currently violated in the expectation for \ntwophase_snapshot, because the COMMIT for `s1insert` there appears after \nthe PREPARE of `s2p` in the WAL, but gets decoded before it).\n\nThe patch I'm attaching corrects this expectation in twophase_snapshot, \nadds an explanatory diagram, and eliminates any danger of sending \nPREPAREs at COMMIT PREPARED time. Thereby preserving the ordering of \nPREPAREs vs COMMITs.\n\nGiven the output plugin supports two-phase commit, I argue there must be \na good reason for it setting the start_decoding_at LSN to a point in \ntime after a PREPARE. To me that means the output plugin (or its \ndownstream replica) has processed the PREPARE (and the downstream \nreplica did whatever it needed to do on its side in order to make the \ntransaction ready to be committed in a second phase).\n\n(In the weird case of an output plugin that wants to enable two-phase \ncommit but does not really support it downstream, it's still possible \nfor it to hold back LSN confirmations for prepared-but-still-in-flight \ntransactions. However, I'm having a hard time justifying this use case.)\n\nWith that line of thinking, the point in time (or in WAL) of the COMMIT \nPREPARED does not matter at all to reason about the decoding of the \nPREPARE operation. Instead, there are only exactly two cases to consider:\n\na) the PREPARE happened before the start_decoding_at LSN and must not be \ndecoded. (But the effects of the PREPARE must then be included in the \ninitial synchronization. If that's not supported, the output plugin \nshould not enable two-phase commit.)\n\nb) the PREPARE happens after the start_decoding_at LSN and must be \ndecoded. 
(It obviously is not included in the initial synchronization \nor decoded by a previous instance of the decoder process.)\n\nThe case where the PREPARE lies before SNAPBUILD_CONSISTENT must always \nbe case a) where we must not repeat the PREPARE, anyway. And in case b) \nwhere we need a consistent snapshot to decode the PREPARE, existing \nprovisions already guarantee that to be possible (or how would this be \ndifferent from a regular single-phase commit?).\n\nPlease let me know what you think and whether this approach is feasible \nfor you as well.\n\nRegards\n\nMarkus", "msg_date": "Fri, 19 Feb 2021 15:53:32 +0100", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Fri, Feb 19, 2021 at 8:23 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> With that line of thinking, the point in time (or in WAL) of the COMMIT\n> PREPARED does not matter at all to reason about the decoding of the\n> PREPARE operation. Instead, there are only exactly two cases to consider:\n>\n> a) the PREPARE happened before the start_decoding_at LSN and must not be\n> decoded. (But the effects of the PREPARE must then be included in the\n> initial synchronization. If that's not supported, the output plugin\n> should not enable two-phase commit.)\n>\n\nI see a problem with this assumption. During the initial\nsynchronization, this transaction won't be visible to snapshot and we\nwon't copy it. Then later if we won't decode and send it then the\nreplica will be out of sync. 
Such a problem won't happen with Ajin's\npatch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 20 Feb 2021 09:08:03 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Fri, Feb 19, 2021 at 8:21 AM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> Based on this suggestion, I have created a patch on HEAD which now\n> does not allow repeated decoding\n> of prepared transactions. For this, the code now enforces\n> full_snapshot if two-phase decoding is enabled.\n> Do have a look at the patch and see if you have any comments.\n>\n\nFew minor comments:\n===================\n1.\n.git/rebase-apply/patch:135: trailing whitespace.\n * We need to mark the transaction as prepared, so that we\ndon't resend it on\nwarning: 1 line adds whitespace errors.\n\nWhitespace issue.\n\n2.\n/*\n+ * Set snapshot type\n+ */\n+void\n+SetSnapBuildType(SnapBuild *builder, bool need_full_snapshot)\n\nThere is no caller which passes the second parameter as false, so why\nhave it? Can't we have a function with SetSnapBuildFullSnapshot or\nsomething like that?\n\n3.\n@@ -431,6 +431,10 @@ CreateInitDecodingContext(const char *plugin,\n startup_cb_wrapper(ctx, &ctx->options, true);\n MemoryContextSwitchTo(old_context);\n\n+ /* If two-phase is on, then only full snapshot can be used */\n+ if (ctx->twophase)\n+ SetSnapBuildType(ctx->snapshot_builder, true);\n+\n ctx->reorder->output_rewrites = ctx->options.receive_rewrites;\n\n return ctx;\n@@ -534,6 +538,10 @@ CreateDecodingContext(XLogRecPtr start_lsn,\n\n ctx->reorder->output_rewrites = ctx->options.receive_rewrites;\n\n+ /* If two-phase is on, then only full snapshot can be used */\n+ if (ctx->twophase)\n+ SetSnapBuildType(ctx->snapshot_builder, true);\n\nI think it is better to add a detailed comment on why we are doing\nthis? 
You can write the comment in one of the places.\n\n> Currently one problem with this, as you have also mentioned in your\n> last mail, is that if initially two-phase is disabled in\n> test_decoding while\n> decoding prepare (causing the prepared transaction to not be decoded)\n> and later enabled after the commit prepared (where it assumes that the\n> transaction was decoded at prepare time), then the transaction is not\n> decoded at all. For eg:\n>\n> postgres=# begin;\n> BEGIN\n> postgres=*# INSERT INTO do_write DEFAULT VALUES;\n> INSERT 0 1\n> postgres=*# PREPARE TRANSACTION 'test1';\n> PREPARE TRANSACTION\n> postgres=# SELECT data FROM\n> pg_logical_slot_get_changes('isolation_slot', NULL, NULL,\n> 'include-xids', 'false', 'skip-empty-xacts', '1', 'two-phase-commit',\n> '0');\n> data\n> ------\n> (0 rows)\n> postgres=# commit prepared 'test1';\n> COMMIT PREPARED\n> postgres=# SELECT data FROM\n> pg_logical_slot_get_changes('isolation_slot', NULL, NULL,\n> 'include-xids', 'false', 'skip-empty-xacts', '1', 'two-phase-commit',\n> '1');\n> data\n> -------------------------\n> COMMIT PREPARED 'test1' (1 row)\n>\n> 1st pg_logical_slot_get_changes is called with two-phase-commit off,\n> 2nd is called with two-phase-commit on. 
You can see that the\n> transaction is not decoded at all.\n> For this, I am planning to change the semantics such that\n> two-phase-commit can only be specified while creating the slot using\n> pg_create_logical_replication_slot()\n> and not in pg_logical_slot_get_changes, thus preventing\n> two-phase-commit flag from being toggled between restarts of the\n> decoder.\n>\n\n+1.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 20 Feb 2021 09:46:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Sat, Feb 20, 2021 at 9:46 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Feb 19, 2021 at 8:21 AM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n> > Based on this suggestion, I have created a patch on HEAD which now\n> > does not allow repeated decoding\n> > of prepared transactions. For this, the code now enforces\n> > full_snapshot if two-phase decoding is enabled.\n> > Do have a look at the patch and see if you have any comments.\n> >\n>\n> Few minor comments:\n> ===================\n> 1.\n> .git/rebase-apply/patch:135: trailing whitespace.\n> * We need to mark the transaction as prepared, so that we\n> don't resend it on\n> warning: 1 line adds whitespace errors.\n>\n> Whitespace issue.\n>\n> 2.\n> /*\n> + * Set snapshot type\n> + */\n> +void\n> +SetSnapBuildType(SnapBuild *builder, bool need_full_snapshot)\n>\n> There is no caller which passes the second parameter as false, so why\n> have it? 
Can't we have a function with SetSnapBuildFullSnapshot or\n> something like that?\n>\n> 3.\n> @@ -431,6 +431,10 @@ CreateInitDecodingContext(const char *plugin,\n> startup_cb_wrapper(ctx, &ctx->options, true);\n> MemoryContextSwitchTo(old_context);\n>\n> + /* If two-phase is on, then only full snapshot can be used */\n> + if (ctx->twophase)\n> + SetSnapBuildType(ctx->snapshot_builder, true);\n> +\n> ctx->reorder->output_rewrites = ctx->options.receive_rewrites;\n>\n> return ctx;\n> @@ -534,6 +538,10 @@ CreateDecodingContext(XLogRecPtr start_lsn,\n>\n> ctx->reorder->output_rewrites = ctx->options.receive_rewrites;\n>\n> + /* If two-phase is on, then only full snapshot can be used */\n> + if (ctx->twophase)\n> + SetSnapBuildType(ctx->snapshot_builder, true);\n>\n> I think it is better to add a detailed comment on why we are doing\n> this? You can write the comment in one of the places.\n>\n\nFew more comments:\n==================\n1. I think you need to update the examples in the docs as well [1].\n2. Also the text in the description of begin_prepare_cb [2] needs some\nadjustment. We can say something on lines that if users want they can\ncheck if the same GID exists and then they can either error out or\ntake appropriate action based on their need.\n\n[1] - https://www.postgresql.org/docs/devel/logicaldecoding-example.html\n[2] - https://www.postgresql.org/docs/devel/logicaldecoding-output-plugin.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 20 Feb 2021 14:58:05 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On 20.02.21 04:38, Amit Kapila wrote:\n> I see a problem with this assumption. During the initial\n> synchronization, this transaction won't be visible to snapshot and we\n> won't copy it. Then later if we won't decode and send it then the\n> replica will be out of sync. 
Such a problem won't happen with Ajin's\n> patch.\n\nYou are assuming that the initial snapshot is a) logical and b) dumb.\n\nA physical snapshot very well \"sees\" prepared transactions and will \nrestore them to their prepared state. But even in the logical case, I \nthink it's beneficial to keep the decoder simpler and instead require \nsome support for two-phase commit in the initial synchronization logic. \n For example using the following approach (you will recognize \nsimilarities to what snapbuild does):\n\n1.) create the slot\n2.) start to retrieve changes and queue them\n3.) wait for the prepared transactions that were pending at the\n point in time of step 1 to complete\n4.) take a snapshot (by visibility, w/o requiring to \"see\" prepared\n transactions)\n5.) apply the snapshot\n6.) replay the queue, filtering commits already visible in the\n snapshot\n\nJust as with the solution proposed by Ajin and you, this has the danger \nof showing transactions as committed without the effects of the PREPAREs \nbeing \"visible\" (after step 5 but before 6).\n\nHowever, this approach of solving the problem outside of the walsender \nhas two advantages:\n\n* The delay in step 3 can be made visible and dealt with. As there's\n no upper boundary to that delay, it makes sense to e.g. inform the\n user after 10 minutes and provide a list of two-phase transactions\n still in progress.\n\n* Second, it becomes possible to avoid inconsistencies during the\n reconciliation window in between steps 5 and 6 by disallowing\n concurrent (user) transactions to run until after completion of\n step 6.\n\nWhereas the current implementation hides this in the walsender without \nany way to determine how much a PREPARE had been delayed or when \nconsistency has been reached. (Of course, short of using the very same \ninitial snapshotting approach outlined above. 
For which the reordering \nlogic in the walsender does more harm than good.)\n\nEssentially, I think I'm saying that while I agree that some kind of \nsnapshot synchronization logic is needed, it should live in a different \nplace.\n\nRegards\n\nMarkus\n\n\n", "msg_date": "Sat, 20 Feb 2021 11:55:19 +0100", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Sat, Feb 20, 2021 at 4:25 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> On 20.02.21 04:38, Amit Kapila wrote:\n> > I see a problem with this assumption. During the initial\n> > synchronization, this transaction won't be visible to snapshot and we\n> > won't copy it. Then later if we won't decode and send it then the\n> > replica will be out of sync. Such a problem won't happen with Ajin's\n> > patch.\n>\n> You are assuming that the initial snapshot is a) logical and b) dumb.\n>\n> A physical snapshot very well \"sees\" prepared transactions and will\n> restore them to their prepared state. But even in the logical case, I\n> think it's beneficial to keep the decoder simpler\n>\n\nI think after the patch Ajin proposed decoders won't need any special\nchecks after receiving the prepared xacts. What additional simplicity\nthis approach will bring? I rather see that we might need to change\nthe exiting initial sync (copy) with additional restrictions to\nsupport two-pc for subscribers.\n\n> and instead require\n> some support for two-phase commit in the initial synchronization logic.\n> For example using the following approach (you will recognize\n> similarities to what snapbuild does):\n>\n> 1.) create the slot\n> 2.) start to retrieve changes and queue them\n> 3.) wait for the prepared transactions that were pending at the\n> point in time of step 1 to complete\n> 4.) take a snapshot (by visibility, w/o requiring to \"see\" prepared\n> transactions)\n> 5.) 
apply the snapshot\n>\n\nDo you mean to say that after creating the slot we take an additional\npass over WAL (till the LSN where we found a consistent snapshot) to\ncollect all prepared transactions and wait for them to get\ncommitted/rollbacked?\n\n> 6.) replay the queue, filtering commits already visible in the\n> snapshot\n>\n> Just as with the solution proposed by Ajin and you, this has the danger\n> of showing transactions as committed without the effects of the PREPAREs\n> being \"visible\" (after step 5 but before 6).\n>\n\nI think the scheme proposed by you is still not fully clear to me but\ncan you please explain how in the existing proposed patch there is a\ndanger of showing transactions as committed without the effects of the\nPREPAREs being \"visible\"?\n\n\n> However, this approach of solving the problem outside of the walsender\n> has two advantages:\n>\n> * The delay in step 3 can be made visible and dealt with. As there's\n> no upper boundary to that delay, it makes sense to e.g. inform the\n> user after 10 minutes and provide a list of two-phase transactions\n> still in progress.\n>\n> * Second, it becomes possible to avoid inconsistencies during the\n> reconciliation window in between steps 5 and 6 by disallowing\n> concurrent (user) transactions to run until after completion of\n> step 6.\n>\n\nThis second point sounds like a restriction that users might not like.\n\n> Whereas the current implementation hides this in the walsender without\n> any way to determine how much a PREPARE had been delayed or when\n> consistency has been reached. (Of course, short of using the very same\n> initial snapshotting approach outlined above. 
For which the reordering\n> logic in the walsender does more harm than good.)\n>\n> Essentially, I think I'm saying that while I agree that some kind of\n> snapshot synchronization logic is needed, it should live in a different\n> place.\n>\n\nBut we need something in existing logic in WALSender or somewhere to\nallow supporting 2PC for subscriptions and from your above\ndescription, it is not clear to me how we can achieve that?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 20 Feb 2021 17:45:48 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "Hi,\n\nOn Fri, Feb 19, 2021, at 19:38, Amit Kapila wrote:\n> On Fri, Feb 19, 2021 at 8:23 PM Markus Wanner\n> <markus.wanner@enterprisedb.com> wrote:\n> >\n> > With that line of thinking, the point in time (or in WAL) of the COMMIT\n> > PREPARED does not matter at all to reason about the decoding of the\n> > PREPARE operation. Instead, there are only exactly two cases to consider:\n> >\n> > a) the PREPARE happened before the start_decoding_at LSN and must not be\n> > decoded. (But the effects of the PREPARE must then be included in the\n> > initial synchronization. If that's not supported, the output plugin\n> > should not enable two-phase commit.)\n> >\n> \n> I see a problem with this assumption. During the initial\n> synchronization, this transaction won't be visible to snapshot and we\n> won't copy it. Then later if we won't decode and send it then the\n> replica will be out of sync. Such a problem won't happen with Ajin's\n> patch.\n\nWhy isn't the more obvious answer to this to not allow/disable 2pc decoding during the initial sync? You can't really make sense of it before you're synced anyway... 
\n\nRegards,\n\nAndres\n\n\n", "msg_date": "Sat, 20 Feb 2021 08:55:46 -0800", "msg_from": "\"Andres Freund\" <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "Hi,\n\nOn 2021-02-19 13:50:52 +1100, Ajin Cherian wrote:\n> From 129947ab2d0ba223862ed1c87be0f96b51645ba0 Mon Sep 17 00:00:00 2001\n> From: Ajin Cherian <ajinc@fast.au.fujitsu.com>\n> Date: Thu, 18 Feb 2021 20:18:16 -0500\n> Subject: [PATCH] Don't allow repeated decoding of prepared transactions.\n> \n> Enforce full snapshot while decoding with two-phase enabled. This\n> allows the decoder to differentiate between prepared transaction that\n> were sent prior to restart and prepared transactions that were not sent\n> because they were prior to consistent snapshot.\n\nIsn't this an *extremely* expensive solution? Maintaining a full\nsnapshot is pretty darn expensive - so expensive that it's repeatedly\nbeen a problem even just for building the initial snapshot (to the point\nof being unable to do so). And that's typically a comparatively rare\noperation, not something continual - but what you're proposing is a cost\npaid during ongoing replication.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 20 Feb 2021 12:13:17 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Sat, Feb 20, 2021 at 10:26 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On Fri, Feb 19, 2021, at 19:38, Amit Kapila wrote:\n> > On Fri, Feb 19, 2021 at 8:23 PM Markus Wanner\n> > <markus.wanner@enterprisedb.com> wrote:\n> > >\n> > > With that line of thinking, the point in time (or in WAL) of the COMMIT\n> > > PREPARED does not matter at all to reason about the decoding of the\n> > > PREPARE operation. 
Instead, there are only exactly two cases to consider:\n> > >\n> > > a) the PREPARE happened before the start_decoding_at LSN and must not be\n> > > decoded. (But the effects of the PREPARE must then be included in the\n> > > initial synchronization. If that's not supported, the output plugin\n> > > should not enable two-phase commit.)\n> > >\n> >\n> > I see a problem with this assumption. During the initial\n> > synchronization, this transaction won't be visible to snapshot and we\n> > won't copy it. Then later if we won't decode and send it then the\n> > replica will be out of sync. Such a problem won't happen with Ajin's\n> > patch.\n>\n> Why isn't the more obvious answer to this to not allow/disable 2pc decoding during the initial sync?\n>\n\nHere, I am assuming you are asking to disable 2PC both via\napply-worker and tablesync worker till the initial sync (aka all\ntables are in SUBREL_STATE_READY state) phase is complete. If we do\nthat and what if commit prepared happened after the initial sync phase\nbut prepare happened before that? At Commit prepared because the 2PC\nis enabled, we will just send Commit Prepared without the actual data\nand prepare. Now, to solve that say we remember in TXN that at prepare\ntime 2PC was not enabled so at commit prepared time consider that 2PC\nis disabled for that TXN and send the entire transaction along with\ncommit as we do for non-2PC TXNs. But it is possible that a restart\nmight happen before the commit prepared and then it is possible that\nprepare falls before start_decoding_at point so we will still skip\nsending it even though 2PC is enabled after the restart and just send\nthe commit prepared. 
So, again that can lead to replica going out of\nsync.\n\nThe other thing related to this is to see to ensure that via SQL APIs\nwe don't skip any prepared xacts and just return commit prepared.\nBasically, the example case, I have described in my email above [1].\n\nOne of the ideas I have previously speculated to overcome these\nchallenges is to someway persist the information of Prepares that are\ndecoded. Say, after sending prepare, we update the slot information on\ndisk to indicate that the particular GID is sent. Then next time\nwhenever we have to skip prepare due to whatever reason, we can check\nthe existence of persistent information on disk for that GID, if it\nexists then we need to send just Commit Prepared, otherwise, the\nentire transaction. We can remove this information during or after\nCheckPointSnapBuild, basically, we can remove the information of all\nGID's that are after cutoff LSN computed via\nReplicationSlotsComputeLogicalRestartLSN. But that seems to be costly\nso we didn't pursue it.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1L5aX1BL9Xg-wSULbFeB417G0v9qk5qZ6NbYCkCo6JUGQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sun, 21 Feb 2021 11:32:29 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "Hi,\n\nOn 2021-02-21 11:32:29 +0530, Amit Kapila wrote:\n> Here, I am assuming you are asking to disable 2PC both via\n> apply-worker and tablesync worker till the initial sync (aka all\n> tables are in SUBREL_STATE_READY state) phase is complete. If we do\n> that and what if commit prepared happened after the initial sync phase\n> but prepare happened before that?\n\nIsn't that pretty easy to detect? You compare the LSN of the tx prepare\nwith the LSN of achieving consistency? 
Any restart will recover the\nLSNs, because restart_lsn will be before the start of the tx.\n\n\n> At Commit prepared because the 2PC is enabled, we will just send\n> Commit Prepared without the actual data and prepare. Now, to solve\n> that say we remember in TXN that at prepare time 2PC was not enabled\n> so at commit prepared time consider that 2PC is disabled for that TXN\n> and send the entire transaction along with commit as we do for non-2PC\n> TXNs. But it is possible that a restart might happen before the commit\n> prepared and then it is possible that prepare falls before\n> start_decoding_at point so we will still skip sending it even though\n> 2PC is enabled after the restart and just send the commit\n> prepared. So, again that can lead to replica going out of sync.\n\nI don't think that an LSN based approach is susceptible to this. And it\nwouldn't require more memory etc. than we do now.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 21 Feb 2021 14:26:26 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Mon, Feb 22, 2021 at 3:56 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2021-02-21 11:32:29 +0530, Amit Kapila wrote:\n> > Here, I am assuming you are asking to disable 2PC both via\n> > apply-worker and tablesync worker till the initial sync (aka all\n> > tables are in SUBREL_STATE_READY state) phase is complete. If we do\n> > that and what if commit prepared happened after the initial sync phase\n> > but prepare happened before that?\n>\n> Isn't that pretty easy to detect? You compare the LSN of the tx prepare\n> with the LSN of achieving consistency?\n>\n\nI think by LSN of achieving consistency, you mean start_decoding_at\nLSN. It is possible that start_decoding_at point has been moved ahead\nbecause of some other unrelated commit that happens between prepare\nand commit prepared. 
Something like below:\n\nLSN for Prepare of xact t1 at 500\nLSN for Commit of xact t2 at 520\nLSN for Commit Prepared at 550\n\nSay we skipped prepare because 2PC was not enabled but then decoded\nand sent Commit of xact t2. I think after this start_decoding_at LSN\nwill be at 520. So comparing the prepare LSN of xact t1 with\nstart_decoding_at will lead to skipping the prepare after the restart\nand we will just send the commit prepared without prepare and data\nwhen we process LSN of Commit Prepared at 550.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 22 Feb 2021 08:22:35 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "Hi,\n\nOn 2021-02-22 08:22:35 +0530, Amit Kapila wrote:\n> On Mon, Feb 22, 2021 at 3:56 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2021-02-21 11:32:29 +0530, Amit Kapila wrote:\n> > > Here, I am assuming you are asking to disable 2PC both via\n> > > apply-worker and tablesync worker till the initial sync (aka all\n> > > tables are in SUBREL_STATE_READY state) phase is complete. If we do\n> > > that and what if commit prepared happened after the initial sync phase\n> > > but prepare happened before that?\n> >\n> > Isn't that pretty easy to detect? You compare the LSN of the tx prepare\n> > with the LSN of achieving consistency?\n> >\n>\n> I think by LSN of achieving consistency, you mean start_decoding_at\n> LSN.\n\nKinda, but not in the way you suggest. I mean the LSN at which the slot\nreached SNAPBUILD_CONSISTENT. 
Which also is the point in the WAL stream\nwe exported the initial snapshot for.\n\nMy understanding of why you need to have special handling of 2pc PREPARE\nis that the initial snapshot will not contain the contents of the\nprepared transaction, therefore you need to send it out at some point\n(or be incorrect).\n\nYour solution to this is:\n\t/*\n\t * It is possible that this transaction is not decoded at prepare time\n\t * either because by that time we didn't have a consistent snapshot or it\n\t * was decoded earlier but we have restarted. We can't distinguish between\n\t * those two cases so we send the prepare in both the cases and let\n\t * downstream decide whether to process or skip it. We don't need to\n\t * decode the xact for aborts if it is not done already.\n\t */\n\tif (!rbtxn_prepared(txn) && is_commit)\n\nbut IMO this violates a pretty fundamental tenant of how logical\ndecoding is supposed to work, i.e. that data that the client\nacknowledges as having received (via lsn passed to START_REPLICATION)\nshouldn't be sent out again.\n\nWhat I am proposing is to instead track the point at which the slot\ngained consistency - a simple LSN. That way you can change the above\nlogic to instead be\n\nif (txn->final_lsn > snapshot_was_exported_at_lsn)\n ReorderBufferReplay();\nelse\n ...\n\nThat will easily work across restarts, won't lead to sending data twice,\netc.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 21 Feb 2021 20:09:21 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "Hi,\n\nOn 2021-02-19 15:53:32 +0100, Markus Wanner wrote:\n> However, more generally speaking, I suspect you are overthinking this. All\n> of the complexity arises because of the assumption that an output plugin\n> receiving and confirming a PREPARE may not be able to persist that first\n> phase of transaction application. 
Instead, you are trying to somehow\n> resurrect the transactional changes and the prepare at COMMIT PREPARED time\n> and decode it in a deferred way.\n\nThe output plugin should never persist anything. That's the job of the\nclient, not the output plugin. The output plugin simply doesn't have the\ninformation to know whether the client received data and successfully\napplied data or not.\n\n\n> Given the output plugin supports two-phase commit, I argue there must be a\n> good reason for it setting the start_decoding_at LSN to a point in time\n> after a PREPARE. To me that means the output plugin (or its downstream\n> replica) has processed the PREPARE (and the downstream replica did whatever\n> it needed to do on its side in order to make the transaction ready to be\n> committed in a second phase).\n\nThe output plugin doesn't set / influence start_decoding_at (unless you\nwant to count just ERRORing out).\n\n\n> With that line of thinking, the point in time (or in WAL) of the COMMIT\n> PREPARED does not matter at all to reason about the decoding of the PREPARE\n> operation. Instead, there are only exactly two cases to consider:\n> \n> a) the PREPARE happened before the start_decoding_at LSN and must not be\n> decoded. (But the effects of the PREPARE must then be included in the\n> initial synchronization. If that's not supported, the output plugin should\n> not enable two-phase commit.)\n\nI don't think that can be made work without disproportionate\ncomplexity. Especially not in cases where we start to be CONSISTENT\nbased on pre-existing on-disk snapshots.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 21 Feb 2021 20:22:43 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On 20.02.21 13:15, Amit Kapila wrote:\n> I think after the patch Ajin proposed decoders won't need any special\n> checks after receiving the prepared xacts. 
What additional simplicity\n> this approach will bring?\n\nThe API becomes clearer in that all PREPAREs are always decoded in WAL \nstream order and are not ever deferred (possibly until after the commits \nof many other transactions). No output plugin will need to check \nagainst this peculiarity, but can rely on WAL ordering of events.\n\n(And if an output plugin does not want prepares to be individual events, \nit should simply not enable two-phase support. That seems like \nsomething the output plugin could even do on a per-transaction basis.)\n\n> Do you mean to say that after creating the slot we take an additional\n> pass over WAL (till the LSN where we found a consistent snapshot) to\n> collect all prepared transactions and wait for them to get\n> committed/rollbacked?\n\nNo. A single pass is enough, the decoder won't need any further change \nbeyond the code removal in my patch.\n\nI'm proposing for the synchronization logic (in e.g. pgoutput) to defer \nthe snapshot taking. So that there's some time in between creating the \nlogical slot (at step 1.) and taking a snapshot (at step 4.). Another \nCATCHUP phase, if you want.\n\nSo that all two-phase commit transactions are delivered via either:\n\n* the transferred snapshot (because their COMMIT PREPARED took place\n before the snapshot was taken in (4)), or\n\n* the decoder stream (because their PREPARE took place after the slot\n was fully created and snapbuilder reached a consistent snapshot)\n\nNo transaction can have PREPAREd before (1) but not committed until \nafter (4), because we waited for all prepared transactions to commit in \nstep (3).\n\n> I think the scheme proposed by you is still not fully clear to me but\n> can you please explain how in the existing proposed patch there is a\n> danger of showing transactions as committed without the effects of the\n> PREPAREs being \"visible\"?\n\nPlease see the `twophase_snapshot` isolation test. 
The expected output \nthere shows the insert from s1 being committed prior to the prepare of \nthe transaction in s2.\n\nOn a replica applying the stream in that order, a transaction in between \nthese two events would see the results from s1 while still being allowed \nto lock the row that s2 is about to update. Something I'd expect the \nPREPARE to prevent.\n\nThat is (IMO) wrong in `master` and Ajin's patch doesn't correct it. \n(While my patch does, so don't look at my patch for this example.)\n\n>> * Second, it becomes possible to avoid inconsistencies during the\n>> reconciliation window in between steps 5 and 6 by disallowing\n>> concurrent (user) transactions to run until after completion of\n>> step 6.\n> \n> This second point sounds like a restriction that users might not like.\n\n\"It becomes possible\" cannot be a restriction. If a user (or \nreplication solution) wants to allow for these inconsistencies, it still \ncan. I want to make sure that solutions which *want* to prevent \ninconsistencies can be implemented.\n\nYour concern applies to step (3), though. The current approach is \nclearly quicker to restore the backup and start to apply transactions. 
\nUntil you start to think about reordering the \"early\" commits until \nafter the deferred PREPAREs in the output plugin or on the replica side, \nso as to lock rows by prepared transactions before making other commits \nvisible so as to prevent inconsistencies...\n\n> But we need something in existing logic in WALSender or somewhere to\n> allow supporting 2PC for subscriptions and from your above\n> description, it is not clear to me how we can achieve that?\n\nI agree that some more code is required somewhere, outside of the walsender.\n\nRegards\n\nMarkus\n\n\n", "msg_date": "Mon, 22 Feb 2021 09:25:17 +0100", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On 22.02.21 05:22, Andres Freund wrote:\n> Hi,\n> \n> On 2021-02-19 15:53:32 +0100, Markus Wanner wrote:\n>> However, more generally speaking, I suspect you are overthinking this. All\n>> of the complexity arises because of the assumption that an output plugin\n>> receiving and confirming a PREPARE may not be able to persist that first\n>> phase of transaction application. Instead, you are trying to somehow\n>> resurrect the transactional changes and the prepare at COMMIT PREPARED time\n>> and decode it in a deferred way.\n> \n> The output plugin should never persist anything.\n\nSure, sorry, I was sloppy in formulation. I meant the replica or client \nthat receives the data from the output plugin. Given it asked for \ntwo-phase commits in the output plugin, it clearly is interested in the \nPREPARE.\n\n> That's the job of the\n> client, not the output plugin. The output plugin simply doesn't have the\n> information to know whether the client received data and successfully\n> applied data or not.\n\nExactly. 
Therefore, it should not randomly reshuffle or reorder \nPREPAREs until after other COMMITs.\n\n> The output plugin doesn't set / influence start_decoding_at (unless you\n> want to count just ERRORing out).\n\nYeah, same sloppiness, sorry.\n\n>> With that line of thinking, the point in time (or in WAL) of the COMMIT\n>> PREPARED does not matter at all to reason about the decoding of the PREPARE\n>> operation. Instead, there are only exactly two cases to consider:\n>>\n>> a) the PREPARE happened before the start_decoding_at LSN and must not be\n>> decoded. (But the effects of the PREPARE must then be included in the\n>> initial synchronization. If that's not supported, the output plugin should\n>> not enable two-phase commit.)\n> \n> I don't think that can be made work without disproportionate\n> complexity. Especially not in cases where we start to be CONSISTENT\n> based on pre-existing on-disk snapshots.\n\nWell, the PREPARE to happen before the start_decoding_at LSN is a case \nthe output plugin needs to deal with. I pointed out why the current way \nof dealing with it clearly is wrong.\n\nWhat issues do you see with the approach I proposed?\n\nRegards\n\nMarkus\n\n\n", "msg_date": "Mon, 22 Feb 2021 09:25:52 +0100", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Mon, Feb 22, 2021 at 9:39 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-02-22 08:22:35 +0530, Amit Kapila wrote:\n> > On Mon, Feb 22, 2021 at 3:56 AM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > On 2021-02-21 11:32:29 +0530, Amit Kapila wrote:\n> > > > Here, I am assuming you are asking to disable 2PC both via\n> > > > apply-worker and tablesync worker till the initial sync (aka all\n> > > > tables are in SUBREL_STATE_READY state) phase is complete. 
If we do\n> > > > that and what if commit prepared happened after the initial sync phase\n> > > > but prepare happened before that?\n> > >\n> > > Isn't that pretty easy to detect? You compare the LSN of the tx prepare\n> > > with the LSN of achieving consistency?\n> > >\n> >\n> > I think by LSN of achieving consistency, you mean start_decoding_at\n> > LSN.\n>\n> Kinda, but not in the way you suggest. I mean the LSN at which the slot\n> reached SNAPBUILD_CONSISTENT. Which also is the point in the WAL stream\n> we exported the initial snapshot for.\n>\n\nOkay, that's an interesting idea. I have few questions on this, see below.\n\n> My understanding of why you need to have special handling of 2pc PREPARE\n> is that the initial snapshot will not contain the contents of the\n> prepared transaction, therefore you need to send it out at some point\n> (or be incorrect).\n>\n> Your solution to this is:\n> /*\n> * It is possible that this transaction is not decoded at prepare time\n> * either because by that time we didn't have a consistent snapshot or it\n> * was decoded earlier but we have restarted. We can't distinguish between\n> * those two cases so we send the prepare in both the cases and let\n> * downstream decide whether to process or skip it. We don't need to\n> * decode the xact for aborts if it is not done already.\n> */\n> if (!rbtxn_prepared(txn) && is_commit)\n>\n> but IMO this violates a pretty fundamental tenant of how logical\n> decoding is supposed to work, i.e. that data that the client\n> acknowledges as having received (via lsn passed to START_REPLICATION)\n> shouldn't be sent out again.\n>\n\nI agree that this is not acceptable that is why trying to explore\nother solutions including what you have proposed.\n\n> What I am proposing is to instead track the point at which the slot\n> gained consistency - a simple LSN. 
That way you can change the above\n> logic to instead be\n>\n> if (txn->final_lsn > snapshot_was_exported_at_lsn)\n> ReorderBufferReplay();\n> else\n> ...\n>\n\nWith this if the prepare is prior to consistent_snapshot\n(snapshot_was_exported_at_lsn)) and commit prepared is after then we\nwon't send the prepare and data. Won't we need to send such prepares?\nIf the condition is other way (if (txn->final_lsn <\nsnapshot_was_exported_at_lsn)) then we would send such prepares?\n\nJust to clarify, after the initial copy, say when we start/restart the\nstreaming and we picked the serialized snapshot of some other\nWALSender, we don't need to use snapshot_was_exported_at_lsn\ncorresponding to the serialized snapshot of some other slot?\n\nI am not sure for the matter of this problem enabling 2PC during\ninitial sync (initial snapshot + copy) matters. Because, if we follow\nthe above, then it should be fine even if 2PC is enabled?\n\n> That will easily work across restarts, won't lead to sending data twice,\n> etc.\n>\n\nYeah, we need to probably store this new point as slot's persistent information.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 22 Feb 2021 14:29:09 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "Hi,\n\nOn 2021-02-22 09:25:52 +0100, Markus Wanner wrote:\n> What issues do you see with the approach I proposed?\n\nVery significant increase in complexity for initializing a logical\nreplica, because there's no easy way to just use the initial slot.\n\n- Andres\n\n\n", "msg_date": "Mon, 22 Feb 2021 01:25:25 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "Hi,\n\nOn 2021-02-22 14:29:09 +0530, Amit Kapila wrote:\n> On Mon, Feb 22, 2021 at 9:39 AM Andres Freund <andres@anarazel.de> wrote:\n> > What I am proposing is 
to instead track the point at which the slot\n> > gained consistency - a simple LSN. That way you can change the above\n> > logic to instead be\n> >\n> > if (txn->final_lsn > snapshot_was_exported_at_lsn)\n> > ReorderBufferReplay();\n> > else\n> > ...\n> >\n> \n> With this if the prepare is prior to consistent_snapshot\n> (snapshot_was_exported_at_lsn)) and commit prepared is after then we\n> won't send the prepare and data. Won't we need to send such prepares?\n> If the condition is other way (if (txn->final_lsn <\n> snapshot_was_exported_at_lsn)) then we would send such prepares?\n\nYea, I inverted the condition...\n\n\n> Just to clarify, after the initial copy, say when we start/restart the\n> streaming and we picked the serialized snapshot of some other\n> WALSender, we don't need to use snapshot_was_exported_at_lsn\n> corresponding to the serialized snapshot of some other slot?\n\nCorrect.\n\n\n> Yeah, we need to probably store this new point as slot's persistent information.\n\nShould be fine I think...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 22 Feb 2021 01:27:07 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Mon, Feb 22, 2021 at 2:55 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-02-22 09:25:52 +0100, Markus Wanner wrote:\n> > What issues do you see with the approach I proposed?\n>\n> Very significant increase in complexity for initializing a logical\n> replica, because there's no easy way to just use the initial slot.\n>\n\n+1. 
The solution proposed by Andres seems to be better than other\nideas we have discussed so far.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 22 Feb 2021 15:11:28 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Mon, Feb 22, 2021 at 8:27 PM Andres Freund <andres@anarazel.de> wrote:\n\n> > Yeah, we need to probably store this new point as slot's persistent information.\n>\n> Should be fine I think...\n\nThis idea looks convincing. I'll write up a patch incorporating these changes.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n", "msg_date": "Tue, 23 Feb 2021 20:36:03 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Mon, Feb 22, 2021 at 2:57 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> > Yeah, we need to probably store this new point as slot's persistent information.\n>\n> Should be fine I think...\n>\n\nSo, we are in agreement that the above solution will work and we won't\nneed to resend the prepare after the restart. I would like to once\nagain describe a few other points which we are discussing in this and\nanother thread [1] to see if you or others have any different opinion on\nthose:\n\n1. With respect to SQL APIs, currently 'two-phase-commit' is a plugin\noption so it is possible that the first time when it gets changes\n(pg_logical_slot_get_changes) *without* 2PC enabled it will not get\nthe prepared transaction even though the prepare is after the\nconsistent snapshot. Now, next time during getting changes\n(pg_logical_slot_get_changes) if the 2PC option is enabled it will\nskip prepare because by that time start_decoding_at has been moved. So\nthe user will only get commit prepared as shown in the example in the\nemail above [2]. I think it might be better to allow enable/disable of\n2PC only at create_slot time.
Markus, Ajin, and I seem to be in agreement on this point. I\nthink the same will be true for subscriber-side solution as well.\n\n2. There is a possibility that subscribers miss some prepared xacts.\nLet me explain the problem and solution. Currently, when we create a\nsubscription, we first launch apply-worker and create the main apply\nworker slot and then launch table sync workers as required. Now,\nassume, the apply worker slot is created and after that, we launch\ntablesync worker, which will initiate its slot (sync_slot) creation.\nThen, on the publisher-side, the situation is such that there is a\nprepared transaction that happens before we reach a consistent\nsnapshot for sync_slot.\n\nBecause the WALSender corresponding to apply worker is already running\nso it will be in consistent state, for it, such a prepared xact can be\ndecoded and it will send the same to the subscriber. On the\nsubscriber-side, it can skip applying the data-modification operations\nbecause the corresponding rel is still not in a ready state (see\nshould_apply_changes_for_rel and its callers) simply because the\ncorresponding table sync worker is not finished yet. But prepare will\noccur and it will lead to a prepared transaction on the subscriber.\n\nIn this situation, tablesync worker has skipped prepare because the\nsnapshot was not consistent and then it exited because it is in sync\nwith the apply worker. And apply worker has skipped because tablesync\nwas in-progress. 
Later when Commit prepared will come, the\napply-worker will simply commit the previously prepared transaction\nand we will never see the prepared transaction data.\n\nFor example, consider below situation:\nLSN of Prepare t1 = 490, tablesync skipped because it was prior to a\nconsistent point\nLSN of Commit t2 = 500\nLSN of commit t3 = 510\nLSN of Commit Prepared t1 = 520.\n\nTablesync worker initially (via copy) got till xact t3 (LSN = 510).\nFor the apply worker, we get all the above LSN's as it is started\nbefore tablesync worker and reached a consistent point before it. In\nthe above example, there is a possibility that we miss applying data\nfor xact t1 as explained in previous paragraphs.\n\nSo, the basic premise is that we can't allow tablesync workers to skip\nprepared transactions (which can be processed by apply worker) and\nprocess later commits.\n\nI have one idea to address this. When we get the first begin (for\nprepared xact) in the apply-worker, we can check if there are any\nrelations in \"not_ready\" state and if so then just wait till all the\nrelations become in sync with the apply worker. This is to avoid that\nany of the tablesync workers might skip prepared xact and we don't\nwant apply worker to also skip the same.\n\nNow, it is possible that some tablesync worker has copied the data and\nmoved the sync position ahead of where the current apply worker's\nposition is. In such a case, we need to process transactions in apply\nworker such that we can process commits if any, and write prepared\ntransactions to file. 
For prepared transactions, we can take decisions\nonly once the commit prepared for them has arrived.\n\nThe other idea I have thought of for this is to only enable 2PC after\ninitial sync (when both apply worker and tablesync workers are in\nsync) is over but I think that can lead to the problem described in\npoint 1.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1L%3DdhuCRvyDvrXX5wZgc7s1hLRD29CKCK6oaHtVCPgiFA%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CAFPTHDbbth0XVwf%3DWXcmp%3D_2nU5oNaK4CxetUr22qi1UM5v6rw%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 23 Feb 2021 15:23:49 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Tue, Feb 23, 2021 at 8:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> 1. With respect to SQL APIs, currently 'two-phase-commit' is a plugin\n> option so it is possible that the first time when it gets changes\n> (pg_logical_slot_get_changes) *without* 2PC enabled it will not get\n> the prepared even though prepare is after consistent snapshot. Now\n> next time during getting changes (pg_logical_slot_get_changes) if the\n> 2PC option is enabled it will skip prepare because by that time\n> start_decoding_at has been moved. So the user will only get commit\n> prepared as shown in the example in the email above [2]. I think it\n> might be better to allow enable/disable of 2PC only at create_slot\n> time. Markus, Ajin, and I seem to be in agreement on this point. I\n> think the same will be true for subscriber-side solution as well.\n>\n\nAttaching a patch which avoids repeated decoding of prepares using the\napproach suggest by Andres. 
Added snapshot_was_exported_at_lsn\nfields in ReplicationSlotPersistentData and SnapBuild, which now store\nthe LSN at which the slot snapshot was exported when the slot was created.\nThis patch also modifies the API pg_create_logical_replication_slot()\nto take an extra parameter to enable two-phase commits,\nand disables pg_logical_slot_get_changes() from enabling two-phase.\nI plan to split this into two patches next. But do review and let me\nknow if you have any comments.\n\nregards,\nAjin", "msg_date": "Wed, 24 Feb 2021 16:48:20 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Wed, Feb 24, 2021 at 4:48 PM Ajin Cherian <itsajin@gmail.com> wrote:\n\n> I plan to split this into two patches next. But do review and let me\n> know if you have any comments.\n\nAttaching an updated patch-set with the changes for\nsnapshot_was_exported_at_lsn separated out from the changes for the\nAPIs pg_create_logical_replication_slot() and\npg_logical_slot_get_changes(). Along with a rebase that takes in a few
Along with a rebase that takes in a few\n> more commits since my last patch.\n\nOne observation while verifying the patch I noticed that most of\nReplicationSlotPersistentData structure members are displayed in\npg_replication_slots, but I did not see snapshot_was_exported_at_lsn\nbeing displayed. Is this intentional? If not intentional we can\ninclude snapshot_was_exported_at_lsn in pg_replication_slots.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 25 Feb 2021 17:04:01 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Wed, Feb 24, 2021 at 5:06 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Wed, Feb 24, 2021 at 4:48 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> > I plan to split this into two patches next. But do review and let me\n> > know if you have any comments.\n>\n> Attaching an updated patch-set with the changes for\n> snapshot_was_exported_at_lsn separated out from the changes for the\n> APIs pg_create_logical_replication_slot() and\n> pg_logical_slot_get_changes(). Along with a rebase that takes in a few\n> more commits since my last patch.\n>\n\nFew comments on the first patch:\n1. We can't remove ReorderBufferSkipPrepare because we rely on that in\nSnapBuildDistributeNewCatalogSnapshot.\n2. I have changed the name of the variable from\nsnapshot_was_exported_at_lsn to snapshot_was_exported_at but I am\nstill not very sure about this naming because there are times when we\ndon't export snapshot and we still set this like when creating slots\nwith CRS_NOEXPORT_SNAPSHOT or when creating via SQL APIs. The other\nname that comes to mind is initial_consistency_at, what do you think?\n3. 
Changed comments at various places.\n\nPlease find the above changes as a separate patch, if you like you can\ninclude these in the main patch.\n\nApart from the above, I think the comments related to docs in my\nprevious email [1] are still valid, can you please take care of those.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1Kr34_TiREr57Wpd%3D3%3D03x%3D1n55DAjwJPGpHAEc4dWfUQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 25 Feb 2021 17:06:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Thu, Feb 25, 2021 at 10:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> Few comments on the first patch:\n> 1. We can't remove ReorderBufferSkipPrepare because we rely on that in\n> SnapBuildDistributeNewCatalogSnapshot.\n> 2. I have changed the name of the variable from\n> snapshot_was_exported_at_lsn to snapshot_was_exported_at but I am\n> still not very sure about this naming because there are times when we\n> don't export snapshot and we still set this like when creating slots\n> with CRS_NOEXPORT_SNAPSHOT or when creating via SQL APIs. The other\n> name that comes to mind is initial_consistency_at, what do you think?\n> 3. 
Changed comments at various places.\n>\n> Please find the above changes as a separate patch, if you like you can\n> include these in the main patch.\n>\n> Apart from the above, I think the comments related to docs in my\n> previous email [1] are still valid, can you please take care of those.\n>\n> [1] - https://www.postgresql.org/message-id/CAA4eK1Kr34_TiREr57Wpd%3D3%3D03x%3D1n55DAjwJPGpHAEc4dWfUQ%40mail.gmail.com\n\nI've added Amit's changes-patch as well as addressed comments related\nto docs in the attached patch.\n\n>On Thu, Feb 25, 2021 at 10:34 PM vignesh C <vignesh21@gmail.com> wrote:\n>One observation while verifying the patch I noticed that most of\n>ReplicationSlotPersistentData structure members are displayed in\n>pg_replication_slots, but I did not see snapshot_was_exported_at_lsn\n>being displayed. Is this intentional? If not intentional we can\n>include snapshot_was_exported_at_lsn in pg_replication_slots.\n\nI've updated snapshot_was_exported_at_ member to pg_replication_slots as well.\nDo have a look and let me know if there are any comments.\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Fri, 26 Feb 2021 19:47:45 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Fri, Feb 26, 2021 at 7:47 PM Ajin Cherian <itsajin@gmail.com> wrote:\n\n> I've updated snapshot_was_exported_at_ member to pg_replication_slots as well.\n> Do have a look and let me know if there are any comments.\n\nUpdate with both patches.\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Fri, 26 Feb 2021 21:42:47 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Fri, Feb 26, 2021 at 4:13 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Fri, Feb 26, 2021 at 7:47 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> > I've updated 
snapshot_was_exported_at_ member to pg_replication_slots as well.\n> > Do have a look and let me know if there are any comments.\n>\n> Update with both patches.\n\nThanks for fixing and providing an updated patch. Patch applies, make\ncheck and make check-world passes. I could see the issue working fine.\n\nFew minor comments:\n+ <structfield>snapshot_was_exported_at</structfield> <type>pg_lsn</type>\n+ </para>\n+ <para>\n+ The address (<literal>LSN</literal>) at which the logical\n+ slot found a consistent point at the time of slot creation.\n+ <literal>NULL</literal> for physical slots.\n+ </para></entry>\n+ </row>\n\n\nI had seen earlier also we had some discussion on naming\nsnapshot_was_exported_at. Can we change snapshot_was_exported_at to\nsnapshot_exported_lsn, I felt if we can include the lsn in the name,\nthe user will be able to interpret easily and also it will be similar\nto other columns in pg_replication_slots view.\n\n\n L.restart_lsn,\n L.confirmed_flush_lsn,\n+ L.snapshot_was_exported_at,\n L.wal_status,\n L.safe_wal_size\n\nLooks like there is some indentation issue here.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 26 Feb 2021 19:26:02 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Fri, Feb 26, 2021 at 7:26 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, Feb 26, 2021 at 4:13 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n> > On Fri, Feb 26, 2021 at 7:47 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n> > > I've updated snapshot_was_exported_at_ member to pg_replication_slots as well.\n> > > Do have a look and let me know if there are any comments.\n> >\n> > Update with both patches.\n>\n> Thanks for fixing and providing an updated patch. Patch applies, make\n> check and make check-world passes. 
I could see the issue working fine.\n>\n> Few minor comments:\n> + <structfield>snapshot_was_exported_at</structfield> <type>pg_lsn</type>\n> + </para>\n> + <para>\n> + The address (<literal>LSN</literal>) at which the logical\n> + slot found a consistent point at the time of slot creation.\n> + <literal>NULL</literal> for physical slots.\n> + </para></entry>\n> + </row>\n>\n>\n> I had seen earlier also we had some discussion on naming\n> snapshot_was_exported_at. Can we change snapshot_was_exported_at to\n> snapshot_exported_lsn, I felt if we can include the lsn in the name,\n> the user will be able to interpret easily and also it will be similar\n> to other columns in pg_replication_slots view.\n>\n\nI have recommended above to change this name to initial_consistency_at\nbecause there are times when we don't export snapshot and we still set\nthis like when creating slots with CRS_NOEXPORT_SNAPSHOT or when\ncreating via SQL APIs. I am not sure why Ajin neither changed the\nname nor responded to that comment. What is your opinion?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 27 Feb 2021 08:29:31 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Thu, Feb 25, 2021 at 5:04 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, Feb 24, 2021 at 5:06 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n> > On Wed, Feb 24, 2021 at 4:48 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n> > > I plan to split this into two patches next. But do review and let me\n> > > know if you have any comments.\n> >\n> > Attaching an updated patch-set with the changes for\n> > snapshot_was_exported_at_lsn separated out from the changes for the\n> > APIs pg_create_logical_replication_slot() and\n> > pg_logical_slot_get_changes(). 
Along with a rebase that takes in a few\n> > more commits since my last patch.\n>\n> One observation while verifying the patch I noticed that most of\n> ReplicationSlotPersistentData structure members are displayed in\n> pg_replication_slots, but I did not see snapshot_was_exported_at_lsn\n> being displayed. Is this intentional? If not intentional we can\n> include snapshot_was_exported_at_lsn in pg_replication_slots.\n>\n\nOn thinking about this point, I feel we don't need this new parameter\nin the view because I am not able to see how it is of any use to the\nuser. Over time, corresponding to that LSN there won't be any WAL\nrecord or maybe WAL would be overwritten. I think this is primarily\nfor our internal use so let's not expose it. I intend to remove it\nfrom the patch unless you have some reason for exposing this to the\nuser.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 27 Feb 2021 09:32:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Sat, 27 Feb, 2021, 1:59 pm Amit Kapila, <amit.kapila16@gmail.com> wrote:\n\n>\n> I have recommended above to change this name to initial_consistency_at\n> because there are times when we don't export snapshot and we still set\n> this like when creating slots with CRS_NOEXPORT_SNAPSHOT or when\n> creating via SQL APIs. I am not sure why Ajin neither changed the\n> name nor responded to that comment. What is your opinion?\n>\n\nI am fine with the name initial_consistency_at. 
I am also fine with not\nshowing this in the pg_replication_slot view and keeping this internal.\n\nRegards,\nAjin Cherian\nFujitsu Australia\n\n>\n>\n", "msg_date": "Sat, 27 Feb 2021 17:02:17 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Fri, Feb 26, 2021 at 4:13 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Fri, Feb 26, 2021 at 7:47 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> > I've updated snapshot_was_exported_at_ member to pg_replication_slots as well.\n> > Do have a look and let me know if there are any comments.\n>\n> Update with both patches.\n>\n\nThanks, I have made some minor changes to the first patch and now it\nlooks good to me. The changes are as below:\n1. Removed the changes related to exposing this new parameter via view\nas mentioned in my previous email.\n2. Changed the variable name initial_consistent_point.\n3. 
Ran pgindent, minor changes in comments, and modified the commit message.\n\nLet me know what you think about these changes.\n\nNext, I'll review your second patch which allows the 2PC option to be\nenabled only at slot creation time.\n\n\n--\nWith Regards,\nAmit Kapila.", "msg_date": "Sat, 27 Feb 2021 11:38:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Sat, Feb 27, 2021 at 11:38 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Feb 26, 2021 at 4:13 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n> > On Fri, Feb 26, 2021 at 7:47 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n> > > I've updated snapshot_was_exported_at_ member to pg_replication_slots as well.\n> > > Do have a look and let me know if there are any comments.\n> >\n> > Update with both patches.\n> >\n>\n> Thanks, I have made some minor changes to the first patch and now it\n> looks good to me. The changes are as below:\n> 1. Removed the changes related to exposing this new parameter via view\n> as mentioned in my previous email.\n> 2. Changed the variable name initial_consistent_point.\n> 3. Ran pgindent, minor changes in comments, and modified the commit message.\n>\n> Let me know what you think about these changes.\n>\n\nIn the attached, I have just bumped SNAPBUILD_VERSION as we are\nadding a new member in the SnapBuild structure.\n\n> Next, I'll review your second patch which allows the 2PC option to be\n> enabled only at slot creation time.\n>\n\nFew comments on 0002 patch:\n=========================\n1.\n+\n+ /*\n+ * Disable two-phase here, it will be set in the core if it was\n+ * enabled whole creating the slot.\n+ */\n+ ctx->twophase = false;\n\nTypo, /whole/while. 
I think we don't need to initialize this variable\nhere at all.\n\n2.\n+ /* If twophase is set on the slot at create time, then\n+ * make sure the field in the context is also updated\n+ */\n+ if (MyReplicationSlot->data.twophase)\n+ {\n+ ctx->twophase = true;\n+ }\n+\n\nFor multi-line comments, the first line of comment should be empty.\nAlso, I think this is not the right place because the WALSender path\nneeds to set it separately. I guess you can set it in\nCreateInitDecodingContext/CreateDecodingContext by doing something\nlike\n\nctx->twophase &= MyReplicationSlot->data.twophase\n\n3. I think we can support this option at the protocol level in a\nseparate patch where we need to allow it via replication commands (say\nwhen we support it in CreateSubscription). Right now, there is nothing\nto test all the code you have added in repl_gram.y.\n\n4. I think we can expose this new option via pg_replication_slots.\n\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Sat, 27 Feb 2021 17:36:11 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Sat, Feb 27, 2021 at 8:29 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Feb 26, 2021 at 7:26 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Fri, Feb 26, 2021 at 4:13 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> > >\n> > > On Fri, Feb 26, 2021 at 7:47 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> > >\n> > > > I've updated snapshot_was_exported_at_ member to pg_replication_slots as well.\n> > > > Do have a look and let me know if there are any comments.\n> > >\n> > > Update with both patches.\n> >\n> > Thanks for fixing and providing an updated patch. Patch applies, make\n> > check and make check-world passes. 
I could see the issue working fine.\n> >\n> > Few minor comments:\n> > + <structfield>snapshot_was_exported_at</structfield> <type>pg_lsn</type>\n> > + </para>\n> > + <para>\n> > + The address (<literal>LSN</literal>) at which the logical\n> > + slot found a consistent point at the time of slot creation.\n> > + <literal>NULL</literal> for physical slots.\n> > + </para></entry>\n> > + </row>\n> >\n> >\n> > I had seen earlier also we had some discussion on naming\n> > snapshot_was_exported_at. Can we change snapshot_was_exported_at to\n> > snapshot_exported_lsn, I felt if we can include the lsn in the name,\n> > the user will be able to interpret easily and also it will be similar\n> > to other columns in pg_replication_slots view.\n> >\n>\n> I have recommended above to change this name to initial_consistency_at\n> because there are times when we don't export snapshot and we still set\n> this like when creating slots with CRS_NOEXPORT_SNAPSHOT or when\n> creating via SQL APIs. I am not sure why Ajin neither changed the\n> name nor responded to that comment. What is your opinion?\n\ninitial_consistency_at looks good to me. 
That is more understandable.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sat, 27 Feb 2021 20:25:05 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Sat, Feb 27, 2021 at 5:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Feb 27, 2021 at 11:38 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Feb 26, 2021 at 4:13 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> > >\n> > > On Fri, Feb 26, 2021 at 7:47 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> > >\n> > > > I've updated snapshot_was_exported_at_ member to pg_replication_slots as well.\n> > > > Do have a look and let me know if there are any comments.\n> > >\n> > > Update with both patches.\n> > >\n> >\n> > Thanks, I have made some minor changes to the first patch and now it\n> > looks good to me. The changes are as below:\n> > 1. Removed the changes related to exposing this new parameter via view\n> > as mentioned in my previous email.\n> > 2. Changed the variable name initial_consistent_point.\n> > 3. Ran pgindent, minor changes in comments, and modified the commit message.\n> >\n> > Let me know what you think about these changes.\n> >\n>\n> In the attached, I have just bumped SNAPBUILD_VERSION as we are\n> adding a new member in the SnapBuild structure.\n>\n\nFew minor comments:\n\ngit am v6-0001-Avoid-repeated-decoding-of-prepared-transactions-.patch\nApplying: Avoid repeated decoding of prepared transactions after the restart.\n/home/vignesh/postgres/.git/rebase-apply/patch:286: trailing whitespace.\n#define SNAPBUILD_VERSION 4\nwarning: 1 line adds whitespace errors.\n\nThere is one whitespace error.\n\nIn commit a271a1b50e, we allowed decoding at prepare time and the prepare\nwas decoded again if there is a restart after decoding it. 
It was done\nthat way because we can't distinguish between the cases where we have not\ndecoded the prepare because it was prior to consistent snapshot or we have\ndecoded it earlier but restarted. To distinguish between these two cases,\nwe have introduced an initial_consisten_point at the slot level which is\nan LSN at which we found a consistent point at the time of slot creation.\n\nOne minor typo in commit message, initial_consisten_point should be\ninitial_consistent_point\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sat, 27 Feb 2021 20:34:07 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Sat, Feb 27, 2021 at 11:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> Few comments on 0002 patch:\n> =========================\n> 1.\n> +\n> + /*\n> + * Disable two-phase here, it will be set in the core if it was\n> + * enabled whole creating the slot.\n> + */\n> + ctx->twophase = false;\n>\n> Typo, /whole/while. I think we don't need to initialize this variable\n> here at all.\n>\n> 2.\n> + /* If twophase is set on the slot at create time, then\n> + * make sure the field in the context is also updated\n> + */\n> + if (MyReplicationSlot->data.twophase)\n> + {\n> + ctx->twophase = true;\n> + }\n> +\n>\n> For multi-line comments, the first line of comment should be empty.\n> Also, I think this is not the right place because the WALSender path\n> needs to set it separately. I guess you can set it in\n> CreateInitDecodingContext/CreateDecodingContext by doing something\n> like\n>\n> ctx->twophase &= MyReplicationSlot->data.twophase\n\nUpdated accordingly.\n\n>\n> 3. I think we can support this option at the protocol level in a\n> separate patch where we need to allow it via replication commands (say\n> when we support it in CreateSubscription). 
Right now, there is nothing\n> to test all the code you have added in repl_gram.y.\n>\n\nRemoved that.\n\n\n> 4. I think we can expose this new option via pg_replication_slots.\n>\n\nDone. Added,\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Mon, 1 Mar 2021 12:53:17 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Mon, Mar 1, 2021 at 7:23 AM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Sat, Feb 27, 2021 at 11:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > Few comments on 0002 patch:\n> > =========================\n> > 1.\n> > +\n> > + /*\n> > + * Disable two-phase here, it will be set in the core if it was\n> > + * enabled whole creating the slot.\n> > + */\n> > + ctx->twophase = false;\n> >\n> > Typo, /whole/while. I think we don't need to initialize this variable\n> > here at all.\n> >\n> > 2.\n> > + /* If twophase is set on the slot at create time, then\n> > + * make sure the field in the context is also updated\n> > + */\n> > + if (MyReplicationSlot->data.twophase)\n> > + {\n> > + ctx->twophase = true;\n> > + }\n> > +\n> >\n> > For multi-line comments, the first line of comment should be empty.\n> > Also, I think this is not the right place because the WALSender path\n> > needs to set it separately. I guess you can set it in\n> > CreateInitDecodingContext/CreateDecodingContext by doing something\n> > like\n> >\n> > ctx->twophase &= MyReplicationSlot->data.twophase\n>\n> Updated accordingly.\n>\n> >\n> > 3. I think we can support this option at the protocol level in a\n> > separate patch where we need to allow it via replication commands (say\n> > when we support it in CreateSubscription). Right now, there is nothing\n> > to test all the code you have added in repl_gram.y.\n> >\n>\n> Removed that.\n>\n>\n> > 4. I think we can expose this new option via pg_replication_slots.\n> >\n>\n> Done. 
Added,\n>\n\nv7-0002-Add-option-to-enable-two-phase-commits-in-pg_crea.patch adds\ntwophase to pg_create_logical_replication_slot, I feel this option\nshould be documented in src/sgml/func.sgml.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 1 Mar 2021 10:30:01 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Mon, Mar 1, 2021 at 7:23 AM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n\nPushed, the first patch in the series.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 1 Mar 2021 11:31:32 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Mon, Mar 1, 2021 at 7:23 AM Ajin Cherian <itsajin@gmail.com> wrote:\n>\nFew minor comments on 0002 patch\n=============================\n1.\n ctx->streaming &= enable_streaming;\n- ctx->twophase &= enable_twophase;\n+\n }\n\nSpurious line addition.\n\n2.\n- proallargtypes =>\n'{name,name,text,oid,bool,bool,int4,xid,xid,pg_lsn,pg_lsn,text,int8}',\n- proargmodes => '{o,o,o,o,o,o,o,o,o,o,o,o,o}',\n- proargnames =>\n'{slot_name,plugin,slot_type,datoid,temporary,active,active_pid,xmin,catalog_xmin,restart_lsn,confirmed_flush_lsn,wal_status,safe_wal_size}',\n+ proallargtypes =>\n'{name,name,text,oid,bool,bool,int4,xid,xid,pg_lsn,pg_lsn,text,int8,bool}',\n+ proargmodes => '{o,o,o,o,o,o,o,o,o,o,o,o,o,o}',\n+ proargnames =>\n'{slot_name,plugin,slot_type,datoid,temporary,active,active_pid,xmin,catalog_xmin,restart_lsn,confirmed_flush_lsn,wal_status,safe_wal_size,twophase}',\n prosrc => 'pg_get_replication_slots' },\n { oid => '3786', descr => 'set up a logical replication slot',\n proname => 'pg_create_logical_replication_slot', provolatile => 'v',\n- proparallel => 'u', prorettype => 'record', proargtypes => 'name name bool',\n- proallargtypes => '{name,name,bool,name,pg_lsn}',\n- proargmodes 
=> '{i,i,i,o,o}',\n- proargnames => '{slot_name,plugin,temporary,slot_name,lsn}',\n+ proparallel => 'u', prorettype => 'record', proargtypes => 'name\nname bool bool',\n+ proallargtypes => '{name,name,bool,bool,name,pg_lsn}',\n+ proargmodes => '{i,i,i,i,o,o}',\n+ proargnames => '{slot_name,plugin,temporary,twophase,slot_name,lsn}',\n\nI think it is better to use two_phase here and at other places as well\nto be consistent with similar parameters.\n\n3.\n--- a/src/backend/catalog/system_views.sql\n+++ b/src/backend/catalog/system_views.sql\n@@ -894,7 +894,8 @@ CREATE VIEW pg_replication_slots AS\n L.restart_lsn,\n L.confirmed_flush_lsn,\n L.wal_status,\n- L.safe_wal_size\n+ L.safe_wal_size,\n+ L.twophase\n FROM pg_get_replication_slots() AS L\n\nIndentation issue. Here, you need you spaces instead of tabs.\n\n4.\n@@ -533,6 +533,12 @@ CreateDecodingContext(XLogRecPtr start_lsn,\n\n ctx->reorder->output_rewrites = ctx->options.receive_rewrites;\n\n+ /*\n+ * If twophase is set on the slot at create time, then\n+ * make sure the field in the context is also updated.\n+ */\n+ ctx->twophase &= MyReplicationSlot->data.twophase;\n+\n\nWhy didn't you made similar change in CreateInitDecodingContext when I\nalready suggested the same in my previous email? If we don't make that\nchange then during slot initialization two_phase will always be true\neven though user passed in as false. 
It looks inconsistent and even\nthough there is no direct problem due to that but it could be cause of\npossible problem in future.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 1 Mar 2021 14:38:45 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Mon, Mar 1, 2021 at 8:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> Few minor comments on 0002 patch\n> =============================\n> 1.\n> ctx->streaming &= enable_streaming;\n> - ctx->twophase &= enable_twophase;\n> +\n> }\n>\n> Spurious line addition.\n\nDeleted.\n\n>\n> 2.\n> - proallargtypes =>\n> '{name,name,text,oid,bool,bool,int4,xid,xid,pg_lsn,pg_lsn,text,int8}',\n> - proargmodes => '{o,o,o,o,o,o,o,o,o,o,o,o,o}',\n> - proargnames =>\n> '{slot_name,plugin,slot_type,datoid,temporary,active,active_pid,xmin,catalog_xmin,restart_lsn,confirmed_flush_lsn,wal_status,safe_wal_size}',\n> + proallargtypes =>\n> '{name,name,text,oid,bool,bool,int4,xid,xid,pg_lsn,pg_lsn,text,int8,bool}',\n> + proargmodes => '{o,o,o,o,o,o,o,o,o,o,o,o,o,o}',\n> + proargnames =>\n> '{slot_name,plugin,slot_type,datoid,temporary,active,active_pid,xmin,catalog_xmin,restart_lsn,confirmed_flush_lsn,wal_status,safe_wal_size,twophase}',\n> prosrc => 'pg_get_replication_slots' },\n> { oid => '3786', descr => 'set up a logical replication slot',\n> proname => 'pg_create_logical_replication_slot', provolatile => 'v',\n> - proparallel => 'u', prorettype => 'record', proargtypes => 'name name bool',\n> - proallargtypes => '{name,name,bool,name,pg_lsn}',\n> - proargmodes => '{i,i,i,o,o}',\n> - proargnames => '{slot_name,plugin,temporary,slot_name,lsn}',\n> + proparallel => 'u', prorettype => 'record', proargtypes => 'name\n> name bool bool',\n> + proallargtypes => '{name,name,bool,bool,name,pg_lsn}',\n> + proargmodes => '{i,i,i,i,o,o}',\n> + proargnames => 
'{slot_name,plugin,temporary,twophase,slot_name,lsn}',\n>\n> I think it is better to use two_phase here and at other places as well\n> to be consistent with similar parameters.\n\nUpdated as requested.\n>\n> 3.\n> --- a/src/backend/catalog/system_views.sql\n> +++ b/src/backend/catalog/system_views.sql\n> @@ -894,7 +894,8 @@ CREATE VIEW pg_replication_slots AS\n> L.restart_lsn,\n> L.confirmed_flush_lsn,\n> L.wal_status,\n> - L.safe_wal_size\n> + L.safe_wal_size,\n> + L.twophase\n> FROM pg_get_replication_slots() AS L\n>\n> Indentation issue. Here, you need you spaces instead of tabs.\n\nUpdated.\n>\n> 4.\n> @@ -533,6 +533,12 @@ CreateDecodingContext(XLogRecPtr start_lsn,\n>\n> ctx->reorder->output_rewrites = ctx->options.receive_rewrites;\n>\n> + /*\n> + * If twophase is set on the slot at create time, then\n> + * make sure the field in the context is also updated.\n> + */\n> + ctx->twophase &= MyReplicationSlot->data.twophase;\n> +\n>\n> Why didn't you made similar change in CreateInitDecodingContext when I\n> already suggested the same in my previous email? If we don't make that\n> change then during slot initialization two_phase will always be true\n> even though user passed in as false. 
It looks inconsistent and even\n> though there is no direct problem due to that but it could be cause of\n> possible problem in future.\n\nUpdated.\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Tue, 2 Mar 2021 12:07:06 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Tue, Mar 2, 2021 at 6:37 AM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Mon, Mar 1, 2021 at 8:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > Few minor comments on 0002 patch\n> > =============================\n> > 1.\n> > ctx->streaming &= enable_streaming;\n> > - ctx->twophase &= enable_twophase;\n> > +\n> > }\n> >\n> > Spurious line addition.\n>\n> Deleted.\n>\n> >\n> > 2.\n> > - proallargtypes =>\n> > '{name,name,text,oid,bool,bool,int4,xid,xid,pg_lsn,pg_lsn,text,int8}',\n> > - proargmodes => '{o,o,o,o,o,o,o,o,o,o,o,o,o}',\n> > - proargnames =>\n> > '{slot_name,plugin,slot_type,datoid,temporary,active,active_pid,xmin,catalog_xmin,restart_lsn,confirmed_flush_lsn,wal_status,safe_wal_size}',\n> > + proallargtypes =>\n> > '{name,name,text,oid,bool,bool,int4,xid,xid,pg_lsn,pg_lsn,text,int8,bool}',\n> > + proargmodes => '{o,o,o,o,o,o,o,o,o,o,o,o,o,o}',\n> > + proargnames =>\n> > '{slot_name,plugin,slot_type,datoid,temporary,active,active_pid,xmin,catalog_xmin,restart_lsn,confirmed_flush_lsn,wal_status,safe_wal_size,twophase}',\n> > prosrc => 'pg_get_replication_slots' },\n> > { oid => '3786', descr => 'set up a logical replication slot',\n> > proname => 'pg_create_logical_replication_slot', provolatile => 'v',\n> > - proparallel => 'u', prorettype => 'record', proargtypes => 'name name bool',\n> > - proallargtypes => '{name,name,bool,name,pg_lsn}',\n> > - proargmodes => '{i,i,i,o,o}',\n> > - proargnames => '{slot_name,plugin,temporary,slot_name,lsn}',\n> > + proparallel => 'u', prorettype => 'record', proargtypes => 'name\n> > name bool bool',\n> > + 
proallargtypes => '{name,name,bool,bool,name,pg_lsn}',\n> > + proargmodes => '{i,i,i,i,o,o}',\n> > + proargnames => '{slot_name,plugin,temporary,twophase,slot_name,lsn}',\n> >\n> > I think it is better to use two_phase here and at other places as well\n> > to be consistent with similar parameters.\n>\n> Updated as requested.\n> >\n> > 3.\n> > --- a/src/backend/catalog/system_views.sql\n> > +++ b/src/backend/catalog/system_views.sql\n> > @@ -894,7 +894,8 @@ CREATE VIEW pg_replication_slots AS\n> > L.restart_lsn,\n> > L.confirmed_flush_lsn,\n> > L.wal_status,\n> > - L.safe_wal_size\n> > + L.safe_wal_size,\n> > + L.twophase\n> > FROM pg_get_replication_slots() AS L\n> >\n> > Indentation issue. Here, you need you spaces instead of tabs.\n>\n> Updated.\n> >\n> > 4.\n> > @@ -533,6 +533,12 @@ CreateDecodingContext(XLogRecPtr start_lsn,\n> >\n> > ctx->reorder->output_rewrites = ctx->options.receive_rewrites;\n> >\n> > + /*\n> > + * If twophase is set on the slot at create time, then\n> > + * make sure the field in the context is also updated.\n> > + */\n> > + ctx->twophase &= MyReplicationSlot->data.twophase;\n> > +\n> >\n> > Why didn't you made similar change in CreateInitDecodingContext when I\n> > already suggested the same in my previous email? If we don't make that\n> > change then during slot initialization two_phase will always be true\n> > even though user passed in as false. It looks inconsistent and even\n> > though there is no direct problem due to that but it could be cause of\n> > possible problem in future.\n>\n> Updated.\n>\n\nI have a minor comment regarding the below:\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>two_phase</structfield> <type>bool</type>\n+ </para>\n+ <para>\n+ True if two-phase commits are enabled on this slot.\n+ </para></entry>\n+ </row>\n\nCan we change something like:\nTrue if the slot is enabled for decoding prepared transaction\ninformation. 
Refer link for more information.(link should point where\nmore detailed information is available for two-phase in\npg_create_logical_replication_slot).\n\nAlso there is one small indentation in that line, I think there should\nbe one space before \"True if....\".\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 2 Mar 2021 08:20:33 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Tue, Mar 2, 2021 at 8:20 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n>\n> I have a minor comment regarding the below:\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>two_phase</structfield> <type>bool</type>\n> + </para>\n> + <para>\n> + True if two-phase commits are enabled on this slot.\n> + </para></entry>\n> + </row>\n>\n> Can we change something like:\n> True if the slot is enabled for decoding prepared transaction\n> information. Refer link for more information.(link should point where\n> more detailed information is available for two-phase in\n> pg_create_logical_replication_slot).\n>\n> Also there is one small indentation in that line, I think there should\n> be one space before \"True if....\".\n>\n\nOkay, fixed these but I added a slightly different description. I have\nalso added the parameter description for\npg_create_logical_replication_slot in docs and changed the comments at\nvarious places in the code. Apart from that ran pgindent. The patch\nlooks good to me now. 
Let me know what do you think?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Tue, 2 Mar 2021 09:33:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Tue, Mar 2, 2021 at 3:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\nOne minor comment:\n+ </para>\n+ <para>\n+ True if the slot is enabled for decoding prepared transactions. Always\n+ false for physical slots.\n+ </para></entry>\n+ </row>\n\nThere is an extra space before Always. But when rendered in html this\nis not seen, so this might not be a problem.\n\nOther than that no more comments about the patch. Looks good.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n", "msg_date": "Tue, 2 Mar 2021 16:08:34 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Tue, Mar 2, 2021 at 10:38 AM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Tue, Mar 2, 2021 at 3:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> One minor comment:\n> + </para>\n> + <para>\n> + True if the slot is enabled for decoding prepared transactions. Always\n> + false for physical slots.\n> + </para></entry>\n> + </row>\n>\n> There is an extra space before Always. But when rendered in html this\n> is not seen, so this might not be a problem.\n>\n\nI am just trying to be consistent with the nearby description. For example, see:\n\"The number of bytes that can be written to WAL such that this slot is\nnot in danger of getting in state \"lost\". It is NULL for lost slots,\nas well as if <varname>max_slot_wal_keep_size</varname> is\n<literal>-1</literal>.\"\n\nIn Pg docs, comments, you will find that there are places where we use\na single space before the new line and also places where we use two\nspaces. 
In this case, for the sake of consistency with the nearby\ndescription, I used two spaces.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 2 Mar 2021 10:58:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Tue, Mar 2, 2021 at 9:33 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 2, 2021 at 8:20 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> >\n> > I have a minor comment regarding the below:\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>two_phase</structfield> <type>bool</type>\n> > + </para>\n> > + <para>\n> > + True if two-phase commits are enabled on this slot.\n> > + </para></entry>\n> > + </row>\n> >\n> > Can we change something like:\n> > True if the slot is enabled for decoding prepared transaction\n> > information. Refer link for more information.(link should point where\n> > more detailed information is available for two-phase in\n> > pg_create_logical_replication_slot).\n> >\n> > Also there is one small indentation in that line, I think there should\n> > be one space before \"True if....\".\n> >\n>\n> Okay, fixed these but I added a slightly different description. I have\n> also added the parameter description for\n> pg_create_logical_replication_slot in docs and changed the comments at\n> various places in the code. Apart from that ran pgindent. The patch\n> looks good to me now. Let me know what do you think?\n\nPatch applies cleanly, make check and make check-world passes. I did\nnot find any other issue. 
The patch looks good to me.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 2 Mar 2021 12:43:46 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" }, { "msg_contents": "On Tue, Mar 2, 2021 at 12:43 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Mar 2, 2021 at 9:33 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Mar 2, 2021 at 8:20 AM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > >\n> > > I have a minor comment regarding the below:\n> > > + <row>\n> > > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > > + <structfield>two_phase</structfield> <type>bool</type>\n> > > + </para>\n> > > + <para>\n> > > + True if two-phase commits are enabled on this slot.\n> > > + </para></entry>\n> > > + </row>\n> > >\n> > > Can we change something like:\n> > > True if the slot is enabled for decoding prepared transaction\n> > > information. Refer link for more information.(link should point where\n> > > more detailed information is available for two-phase in\n> > > pg_create_logical_replication_slot).\n> > >\n> > > Also there is one small indentation in that line, I think there should\n> > > be one space before \"True if....\".\n> > >\n> >\n> > Okay, fixed these but I added a slightly different description. I have\n> > also added the parameter description for\n> > pg_create_logical_replication_slot in docs and changed the comments at\n> > various places in the code. Apart from that ran pgindent. The patch\n> > looks good to me now. Let me know what do you think?\n>\n> Patch applies cleanly, make check and make check-world passes. I did\n> not find any other issue. 
The patch looks good to me.\n>\n\nThanks, I have pushed this patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 3 Mar 2021 09:12:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: repeated decoding of prepared transactions" } ]
[ { "msg_contents": "Hi Hackers\n\nWhen reading code related ECPG I found 75220fb was committed in PG13 and master.\nI don't know why it shouldn't be backpatched in PG12 or before.\nCan anyone take a look at this and kindly tell me why.\n\nRegards,\nTang\n\n\n\n\n", "msg_date": "Mon, 8 Feb 2021 09:42:53 +0000", "msg_from": "\"Tang, Haiying\" <tanghy.fnst@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "Made ecpg compatibility mode and run-time behaviour options case\n insensitive" }, { "msg_contents": "\"Tang, Haiying\" <tanghy.fnst@cn.fujitsu.com> writes:\n> When reading code related ECPG I found 75220fb was committed in PG13 and master.\n> I don't know why it shouldn't be backpatched in PG12 or before.\n> Can anyone take a look at this and kindly tell me why.\n\nWe don't usually back-patch things that aren't clear bug fixes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Feb 2021 11:14:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Made ecpg compatibility mode and run-time behaviour options case\n insensitive" }, { "msg_contents": "From: Tom Lane <tgl@sss.pgh.pa.us> \nSent: Tuesday, February 9, 2021 1:14 AM\n\n>> When reading code related ECPG I found 75220fb was committed in PG13 and master.\n>> I don't know why it shouldn't be backpatched in PG12 or before.\n>> Can anyone take a look at this and kindly tell me why.\n>\n>We don't usually back-patch things that aren't clear bug fixes.\n\nThanks for your kindly explanation. Get it now.\n\nRegards,\nTang\n\n\n\n\n\n", "msg_date": "Tue, 9 Feb 2021 00:42:18 +0000", "msg_from": "\"Tang, Haiying\" <tanghy.fnst@cn.fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Made ecpg compatibility mode and run-time behaviour options case\n insensitive" } ]
[ { "msg_contents": "Hi, hackers!\n\nIt seems that if btree index with a unique constraint is corrupted by\nduplicates, amcheck now can not catch this. Reindex becomes impossible as\nit throws an error but otherwise the index will let the user know that it\nis corrupted, and amcheck will tell that the index is clean. So I'd like to\npropose a short patch to improve amcheck for checking the unique\nconstraint. It will output tid's of tuples that are duplicated in the index\n(i.e. more than one tid for the same index key is visible) and the user can\neasily investigate and delete corresponding table entries.\n\n0001 - is the actual patch, and\n0002 - is a temporary hack for testing. It will allow inserting duplicates\nin a table even if an index with the exact name \"idx\" has a unique\nconstraint (generally it is prohibited to insert). Then a new amcheck will\ntell us about these duplicates. It's pity but testing can not be done\nautomatically, as it needs a core recompile. For testing I'd recommend a\nprotocol similar to the following:\n\n- Apply patch 0002\n- Set autovaccum = off in postgresql.conf\n\n\n\n*create table tbl2 (a varchar(50), b varchar(150), c bytea, d\nvarchar(50));create unique index idx on tbl2(a,b);insert into tbl2 select\ni::text::varchar, i::text::varchar, i::text::bytea, i::text::varchar from\ngenerate_series(0,3) as i, generate_series(0,10000) as x;*\n\nSo we'll have a generous amount of duplicates in the table and index. 
Then:\n\n*create extension amcheck;*\n*select bt_index_check('idx', true);*\n\nThere will be output about each pair of duplicates: index tid's, position\nin a posting list (if the index item is deduplicated) and table tid's.\n\nWARNING: index uniqueness is violated for index \"idx\": Index tid=(26,6)\nposting 218 and posting 219 (point to heap tid=(126,93) and tid=(126,94))\npage lsn=0/1B3D420.\nWARNING: index uniqueness is violated for index \"idx\": Index tid=(26,6)\nposting 219 and posting 220 (point to heap tid=(126,94) and tid=(126,95))\npage lsn=0/1B3D420.\nWARNING: index uniqueness is violated for index \"idx\": Index tid=(26,6)\nposting 220 and posting 221 (point to heap tid=(126,95) and tid=(126,96))\npage lsn=0/1B3D420.\nWARNING: index uniqueness is violated for index \"idx\": Index tid=(26,6)\nposting 221 and tid=(26,7) posting 0 (point to heap tid=(126,96) and\ntid=(126,97)) page lsn=0/1B3D420.\nWARNING: index uniqueness is violated for index \"idx\": Index tid=(26,7)\nposting 0 and posting 1 (point to heap tid=(126,97) and tid=(126,98)) page\nlsn=0/1B3D420.\nWARNING: index uniqueness is violated for index \"idx\": Index tid=(26,7)\nposting 1 and posting 2 (point to heap tid=(126,98) and tid=(126,99)) page\nlsn=0/1B3D420.\n\nThings to notice in the test output:\n- cross-page duplicates (when some of them on the one index page and the\nother are on the next). 
(Under patch 0002 they are marked by an additional\nmessage \"*INFO: cross page equal keys\"* to catch them among the other)\n\n- If we delete table entries corresponding to some duplicates in between\nthe other duplicates (vacuum should be off), the message for this entry\nshould disappear but the other duplicates should be detected as previous.\n\n*delete from tbl2 where ctid::text='(126,94)';*\n*select bt_index_check('idx', true);*\n\nWARNING: index uniqueness is violated for index \"idx\": Index tid=(26,6)\nposting 218 and posting 220 (point to heap tid=(126,93) and tid=(126,95))\npage lsn=0/1B3D420.\nWARNING: index uniqueness is violated for index \"idx\": Index tid=(26,6)\nposting 220 and posting 221 (point to heap tid=(126,95) and tid=(126,96))\npage lsn=0/1B3D420.\nWARNING: index uniqueness is violated for index \"idx\": Index tid=(26,6)\nposting 221 and tid=(26,7) posting 0 (point to heap tid=(126,96) and\ntid=(126,97)) page lsn=0/1B3D420.\nWARNING: index uniqueness is violated for index \"idx\": Index tid=(26,7)\nposting 0 and posting 1 (point to heap tid=(126,97) and tid=(126,98)) page\nlsn=0/1B3D420.\nWARNING: index uniqueness is violated for index \"idx\": Index tid=(26,7)\nposting 1 and posting 2 (point to heap tid=(126,98) and tid=(126,99)) page\nlsn=0/1B3D420.\n\nCaveat: if the first entry on the next index page has a key equal to the\nkey on a previous page AND all heap tid's corresponding to this entry are\ninvisible, currently cross-page check can not detect unique constraint\nviolation between previous index page entry and 2nd, 3d and next current\nindex page entries. In this case, there would be a message that recommends\ndoing VACUUM to remove the invisible entries from the index and repeat the\ncheck. 
(Generally, it is recommended to do vacuum before the check, but for\nthe testing purpose I'd recommend turning it off to check the detection of\nvisible-invisible-visible duplicates scenarios)\n\nYour feedback is very much welcome, as usual.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Mon, 8 Feb 2021 14:46:18 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "On Mon, 8 Feb 2021, 14:46 Pavel Borisov <pashkin.elfe@gmail.com wrote:\n\n> Hi, hackers!\n>\n> It seems that if btree index with a unique constraint is corrupted by\n> duplicates, amcheck now can not catch this. Reindex becomes impossible as\n> it throws an error but otherwise the index will let the user know that it\n> is corrupted, and amcheck will tell that the index is clean. So I'd like to\n> propose a short patch to improve amcheck for checking the unique\n> constraint. It will output tid's of tuples that are duplicated in the index\n> (i.e. more than one tid for the same index key is visible) and the user can\n> easily investigate and delete corresponding table entries.\n>\n> 0001 - is the actual patch, and\n> 0002 - is a temporary hack for testing. It will allow inserting duplicates\n> in a table even if an index with the exact name \"idx\" has a unique\n> constraint (generally it is prohibited to insert). Then a new amcheck will\n> tell us about these duplicates. It's pity but testing can not be done\n> automatically, as it needs a core recompile. 
For testing I'd recommend a\n> protocol similar to the following:\n>\n> - Apply patch 0002\n> - Set autovaccum = off in postgresql.conf\n>\n>\n>\n> *create table tbl2 (a varchar(50), b varchar(150), c bytea, d\n> varchar(50));create unique index idx on tbl2(a,b);insert into tbl2 select\n> i::text::varchar, i::text::varchar, i::text::bytea, i::text::varchar from\n> generate_series(0,3) as i, generate_series(0,10000) as x;*\n>\n> So we'll have a generous amount of duplicates in the table and index. Then:\n>\n> *create extension amcheck;*\n> *select bt_index_check('idx', true);*\n>\n> There will be output about each pair of duplicates: index tid's, position\n> in a posting list (if the index item is deduplicated) and table tid's.\n>\n> WARNING: index uniqueness is violated for index \"idx\": Index tid=(26,6)\n> posting 218 and posting 219 (point to heap tid=(126,93) and tid=(126,94))\n> page lsn=0/1B3D420.\n> WARNING: index uniqueness is violated for index \"idx\": Index tid=(26,6)\n> posting 219 and posting 220 (point to heap tid=(126,94) and tid=(126,95))\n> page lsn=0/1B3D420.\n> WARNING: index uniqueness is violated for index \"idx\": Index tid=(26,6)\n> posting 220 and posting 221 (point to heap tid=(126,95) and tid=(126,96))\n> page lsn=0/1B3D420.\n> WARNING: index uniqueness is violated for index \"idx\": Index tid=(26,6)\n> posting 221 and tid=(26,7) posting 0 (point to heap tid=(126,96) and\n> tid=(126,97)) page lsn=0/1B3D420.\n> WARNING: index uniqueness is violated for index \"idx\": Index tid=(26,7)\n> posting 0 and posting 1 (point to heap tid=(126,97) and tid=(126,98)) page\n> lsn=0/1B3D420.\n> WARNING: index uniqueness is violated for index \"idx\": Index tid=(26,7)\n> posting 1 and posting 2 (point to heap tid=(126,98) and tid=(126,99)) page\n> lsn=0/1B3D420.\n>\n> Things to notice in the test output:\n> - cross-page duplicates (when some of them on the one index page and the\n> other are on the next). 
(Under patch 0002 they are marked by an additional\n> message \"*INFO: cross page equal keys\"* to catch them among the other)\n>\n> - If we delete table entries corresponding to some duplicates in between\n> the other duplicates (vacuum should be off), the message for this entry\n> should disappear but the other duplicates should be detected as previous.\n>\n> *delete from tbl2 where ctid::text='(126,94)';*\n> *select bt_index_check('idx', true);*\n>\n> WARNING: index uniqueness is violated for index \"idx\": Index tid=(26,6)\n> posting 218 and posting 220 (point to heap tid=(126,93) and tid=(126,95))\n> page lsn=0/1B3D420.\n> WARNING: index uniqueness is violated for index \"idx\": Index tid=(26,6)\n> posting 220 and posting 221 (point to heap tid=(126,95) and tid=(126,96))\n> page lsn=0/1B3D420.\n> WARNING: index uniqueness is violated for index \"idx\": Index tid=(26,6)\n> posting 221 and tid=(26,7) posting 0 (point to heap tid=(126,96) and\n> tid=(126,97)) page lsn=0/1B3D420.\n> WARNING: index uniqueness is violated for index \"idx\": Index tid=(26,7)\n> posting 0 and posting 1 (point to heap tid=(126,97) and tid=(126,98)) page\n> lsn=0/1B3D420.\n> WARNING: index uniqueness is violated for index \"idx\": Index tid=(26,7)\n> posting 1 and posting 2 (point to heap tid=(126,98) and tid=(126,99)) page\n> lsn=0/1B3D420.\n>\n> Caveat: if the first entry on the next index page has a key equal to the\n> key on a previous page AND all heap tid's corresponding to this entry are\n> invisible, currently cross-page check can not detect unique constraint\n> violation between previous index page entry and 2nd, 3d and next current\n> index page entries. In this case, there would be a message that recommends\n> doing VACUUM to remove the invisible entries from the index and repeat the\n> check. 
(Generally, it is recommended to do vacuum before the check, but for\n> the testing purpose I'd recommend turning it off to check the detection of\n> visible-invisible-visible duplicates scenarios)\n>\n> Your feedback is very much welcome, as usual.\n>\n> --\n> Best regards,\n> Pavel Borisov\n>\n> Postgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n>\n\nThere was typo, I mean, initially\"...index will NOT let the user know that\nit is corrupted...\"", "msg_date": "Mon, 8 Feb 2021 23:31:13 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "\n\n> On Feb 8, 2021, at 2:46 AM, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> \n> 0002 - is a temporary hack for testing. It will allow inserting duplicates in a table even if an index with the exact name \"idx\" has a unique constraint (generally it is prohibited to insert). Then a new amcheck will tell us about these duplicates. It's pity but testing can not be done automatically, as it needs a core recompile. For testing I'd recommend a protocol similar to the following:\n> \n> - Apply patch 0002\n> - Set autovaccum = off in postgresql.conf\n\nThanks Pavel and Anastasia for working on this!\n\nUpdating pg_catalog directly is ugly, but the following seems a simpler way to set up a regression test than having to recompile. 
What do you think?\n\nCREATE TABLE junk (t text);\nCREATE UNIQUE INDEX junk_idx ON junk USING btree (t);\nINSERT INTO junk (t) VALUES ('fee'), ('fi'), ('fo'), ('fum');\nUPDATE pg_catalog.pg_index\n SET indisunique = false\n WHERE indrelid = (SELECT oid FROM pg_catalog.pg_class WHERE relname = 'junk');\nINSERT INTO junk (t) VALUES ('fee'), ('fi'), ('fo'), ('fum');\nUPDATE pg_catalog.pg_index\n SET indisunique = true\n WHERE indrelid = (SELECT oid FROM pg_catalog.pg_class WHERE relname = 'junk');\nSELECT * FROM junk;\n t\n-----\n fee\n fi\n fo\n fum\n fee\n fi\n fo\n fum\n(8 rows)\n\n\\d junk\n Table \"public.junk\"\n Column | Type | Collation | Nullable | Default\n--------+------+-----------+----------+---------\n t | text | | |\nIndexes:\n \"junk_idx\" UNIQUE, btree (t)\n\n\\d junk_idx\n Index \"public.junk_idx\"\n Column | Type | Key? | Definition\n--------+------+------+------------\n t | text | yes | t\nunique, btree, for table \"public.junk\"\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 8 Feb 2021 13:46:50 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "вт, 9 февр. 2021 г. в 01:46, Mark Dilger <mark.dilger@enterprisedb.com>:\n\n>\n>\n> > On Feb 8, 2021, at 2:46 AM, Pavel Borisov <pashkin.elfe@gmail.com>\n> wrote:\n> >\n> > 0002 - is a temporary hack for testing. It will allow inserting\n> duplicates in a table even if an index with the exact name \"idx\" has a\n> unique constraint (generally it is prohibited to insert). Then a new\n> amcheck will tell us about these duplicates. It's pity but testing can not\n> be done automatically, as it needs a core recompile. 
For testing I'd\n> recommend a protocol similar to the following:\n> >\n> > - Apply patch 0002\n> > - Set autovaccum = off in postgresql.conf\n>\n> Thanks Pavel and Anastasia for working on this!\n>\n> Updating pg_catalog directly is ugly, but the following seems a simpler\n> way to set up a regression test than having to recompile. What do you\n> think?\n>\n> Very nice idea, thanks!\nI've made a regression test based on it. PFA v.2 of a patch. Now it doesn't\nneed anything external for testing.\n\n--\nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Tue, 9 Feb 2021 22:43:50 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "To make tests stable I also removed lsn output under warning level. PFA v3\nof a patch\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Tue, 9 Feb 2021 22:57:34 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "Hi,\nMinor comment:\n\n+ if (errflag == true)\n+ ereport(ERROR,\n\nI think 'if (errflag)' should suffice.\n\nCheers\n\nOn Tue, Feb 9, 2021 at 10:44 AM Pavel Borisov <pashkin.elfe@gmail.com>\nwrote:\n\n> вт, 9 февр. 2021 г. в 01:46, Mark Dilger <mark.dilger@enterprisedb.com>:\n>\n>>\n>>\n>> > On Feb 8, 2021, at 2:46 AM, Pavel Borisov <pashkin.elfe@gmail.com>\n>> wrote:\n>> >\n>> > 0002 - is a temporary hack for testing. It will allow inserting\n>> duplicates in a table even if an index with the exact name \"idx\" has a\n>> unique constraint (generally it is prohibited to insert). Then a new\n>> amcheck will tell us about these duplicates. 
It's pity but testing can not\n>> be done automatically, as it needs a core recompile. For testing I'd\n>> recommend a protocol similar to the following:\n>> >\n>> > - Apply patch 0002\n>> > - Set autovaccum = off in postgresql.conf\n>>\n>> Thanks Pavel and Anastasia for working on this!\n>>\n>> Updating pg_catalog directly is ugly, but the following seems a simpler\n>> way to set up a regression test than having to recompile. What do you\n>> think?\n>>\n>> Very nice idea, thanks!\n> I've made a regression test based on it. PFA v.2 of a patch. Now it\n> doesn't need anything external for testing.\n>\n> --\n> Best regards,\n> Pavel Borisov\n>\n> Postgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n>\n", "msg_date": "Tue, 9 Feb 2021 11:41:16 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "\n\n> On Feb 9, 2021, at 10:43 AM, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> \n> вт, 9 февр. 2021 г. в 01:46, Mark Dilger <mark.dilger@enterprisedb.com>:\n> \n> \n> > On Feb 8, 2021, at 2:46 AM, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> > \n> > 0002 - is a temporary hack for testing. It will allow inserting duplicates in a table even if an index with the exact name "idx" has a unique constraint (generally it is prohibited to insert). Then a new amcheck will tell us about these duplicates. It's pity but testing can not be done automatically, as it needs a core recompile. For testing I'd recommend a protocol similar to the following:\n> > \n> > - Apply patch 0002\n> > - Set autovaccum = off in postgresql.conf\n> \n> Thanks Pavel and Anastasia for working on this!\n> \n> Updating pg_catalog directly is ugly, but the following seems a simpler way to set up a regression test than having to recompile. What do you think?\n> \n> Very nice idea, thanks!\n> I've made a regression test based on it. PFA v.2 of a patch. Now it doesn't need anything external for testing.\n\nIf bt_index_check() and bt_index_parent_check() are to have this functionality, shouldn't there be an option controlling it much as the option (heapallindexed boolean) controls checking whether all entries in the heap are indexed in the btree? It seems inconsistent to have an option to avoid checking the heap for that, but not for this. Alternately, this might even be better coded as its own function, named something like bt_unique_index_check() perhaps. I hope Peter might advise?\n\nThe regression test you provided is not portable. 
I am getting lots of errors due to differing output of the form \"page lsn=0/4DAD7E0\". You might turn this into a TAP test and use a regular expression to check the output.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 1 Mar 2021 11:21:59 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": ">\n> The regression test you provided is not portable. I am getting lots of\n> errors due to differing output of the form \"page lsn=0/4DAD7E0\". You might\n> turn this into a TAP test and use a regular expression to check the output.\n>\nMay I ask you to ensure you used v3 of a patch to check? I've made tests\nportable in v3, probably, you've checked not the last version.\n\nThanks for your attention to the patch\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\nThe regression test you provided is not portable.  I am getting lots of errors due to differing output of the form \"page lsn=0/4DAD7E0\".  You might turn this into a TAP test and use a regular expression to check the output.May I ask you to ensure you used v3 of a patch to check? I've made tests portable in v3, probably, you've checked not the last version.Thanks for your attention to the patch-- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com", "msg_date": "Tue, 2 Mar 2021 00:05:57 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "\n\n> On Mar 1, 2021, at 12:05 PM, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> \n> The regression test you provided is not portable. I am getting lots of errors due to differing output of the form \"page lsn=0/4DAD7E0\". 
You might turn this into a TAP test and use a regular expression to check the output.\n> May I ask you to ensure you used v3 of a patch to check? I've made tests portable in v3, probably, you've checked not the last version.\n\nYes, my review was of v2. Updating to v3, I see that the test passes on my laptop. It still looks brittle to have all the tid values in the test output, but it does pass.\n\n> Thanks for your attention to the patch\n\nThanks for the patch!\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 1 Mar 2021 12:20:31 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": ">\n> If bt_index_check() and bt_index_parent_check() are to have this\n> functionality, shouldn't there be an option controlling it much as the\n> option (heapallindexed boolean) controls checking whether all entries in\n> the heap are indexed in the btree? It seems inconsistent to have an option\n> to avoid checking the heap for that, but not for this. Alternately, this\n> might even be better coded as its own function, named something like\n> bt_unique_index_check() perhaps. I hope Peter might advise?\n>\n\nAs for heap checking, my reasoning was that we can not check whether a\nunique constraint violated by the index, without checking heap tuple\nvisibility. I.e. we can have many identical index entries without\nuniqueness violated if only one of them corresponds to a visible heap\ntuple. So heap checking included in my patch is _necessary_ for unique\nconstraint checking, it should not have an option to be disabled,\notherwise, the only answer we can get is that unique constraint MAY be\nviolated and may not be, which is quite useless. 
If you meant something\ndifferent, please elaborate.\n\nI can try to rewrite unique constraint checking to be done after all others\ncheck but I suppose it's the performance considerations are that made\nprevious amcheck routines to do many checks simultaneously. I tried to\nstick to this practice. It's also not so elegant to duplicate much code to\nmake uniqueness checks independently and the resulting patch will be much\nbigger and harder to review.\n\nAnyway, your and Peter's further considerations are always welcome.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Tue, 2 Mar 2021 00:23:23 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> Yes, my review was of v2. Updating to v3, I see that the test passes on my laptop. It still looks brittle to have all the tid values in the test output, but it does pass.\n\nHm, anyone tried it on 32-bit hardware?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Mar 2021 15:23:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "\n\n> On Mar 1, 2021, at 12:23 PM, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> \n> If bt_index_check() and bt_index_parent_check() are to have this functionality, shouldn't there be an option controlling it much as the option (heapallindexed boolean) controls checking whether all entries in the heap are indexed in the btree? It seems inconsistent to have an option to avoid checking the heap for that, but not for this. Alternately, this might even be better coded as its own function, named something like bt_unique_index_check() perhaps. I hope Peter might advise?\n> \n> As for heap checking, my reasoning was that we can not check whether a unique constraint violated by the index, without checking heap tuple visibility. I.e. we can have many identical index entries without uniqueness violated if only one of them corresponds to a visible heap tuple. 
So heap checking included in my patch is _necessary_ for unique constraint checking, it should not have an option to be disabled, otherwise, the only answer we can get is that unique constraint MAY be violated and may not be, which is quite useless. If you meant something different, please elaborate. \n\nI completely agree that checking uniqueness requires looking at the heap, but I don't agree that every caller of bt_index_check on an index wants that particular check to be performed. There are multiple ways in which an index might be corrupt, and Peter wrote the code to only check some of them by default, with options to expand the checks to other things. This is why heapallindexed is optional. If you don't want to pay the price of checking all entries in the heap against the btree, you don't have to.\n\nI'm not against running uniqueness checks on unique indexes. It seems fairly normal that a user would want that. Perhaps the option should default to 'true' if unspecified? But having no option at all seems to run contrary to how the other functionality is structured.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 1 Mar 2021 13:05:03 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": ">\n> I completely agree that checking uniqueness requires looking at the heap,\n> but I don't agree that every caller of bt_index_check on an index wants\n> that particular check to be performed. There are multiple ways in which an\n> index might be corrupt, and Peter wrote the code to only check some of them\n> by default, with options to expand the checks to other things. This is why\n> heapallindexed is optional. 
If you don't want to pay the price of checking\n> all entries in the heap against the btree, you don't have to.\n>\n\nI've got the idea and revised the patch accordingly. Thanks!\nPfa v4 of a patch. I've added an optional argument to allow uniqueness\nchecks for the unique indexes.\nAlso, I added a test variant to make them work on 32-bit systems.\nUnfortunately, converting the regression test to TAP would be a pain for\nme. Hope it can be used now as a 2-variant regression test for 32 and 64\nbit systems.\n\nThank you for your consideration!\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Tue, 2 Mar 2021 18:08:43 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "On Mon, Mar 1, 2021 at 11:22 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> If bt_index_check() and bt_index_parent_check() are to have this functionality, shouldn't there be an option controlling it much as the option (heapallindexed boolean) controls checking whether all entries in the heap are indexed in the btree? It seems inconsistent to have an option to avoid checking the heap for that, but not for this.\n\nI agree. Actually, it should probably use the same snapshot as the\nheapallindexed=true case. So either only perform unique constraint\nverification when that option is used, or invent a new option that\nwill still share the snapshot used by heapallindexed=true (when the\noptions are combined).\n\n> The regression test you provided is not portable. I am getting lots of errors due to differing output of the form \"page lsn=0/4DAD7E0\". You might turn this into a TAP test and use a regular expression to check the output.\n\nI would test this using a custom opclass that does simple fault\ninjection. 
For example, an opclass that indexes integers, but can be\nconfigured to dynamically make 0 values equal or unequal to each\nother. That's more representative of real-world problems.\n\nYou \"break the warranty\" by updating pg_index, even compared to\nupdating other system catalogs. In particular, you break the\n\"indcheckxmin wait -- wait for xmin to be old before using index\"\nstuff in get_relation_info(). So it seems worse than updating\npg_attribute, for example (which is something that the tests do\nalready).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 2 Mar 2021 18:54:59 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "On Mon, Feb 8, 2021 at 2:46 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> Caveat: if the first entry on the next index page has a key equal to the key on a previous page AND all heap tid's corresponding to this entry are invisible, currently cross-page check can not detect unique constraint violation between previous index page entry and 2nd, 3d and next current index page entries. In this case, there would be a message that recommends doing VACUUM to remove the invisible entries from the index and repeat the check. (Generally, it is recommended to do vacuum before the check, but for the testing purpose I'd recommend turning it off to check the detection of visible-invisible-visible duplicates scenarios)\n\nIt's rather unlikely that equal values in a unique index will end up\non different leaf pages. It's really rare, in fact. This following\ncomment block from nbtinsert.c (which appears right before we call\n_bt_check_unique()) explains why this is:\n\n* It might be necessary to check a page to the right in _bt_check_unique,\n* though that should be very rare. 
In practice the first page the value ...\n\nYou're going to have to \"couple\" buffer locks in the style of\n_bt_check_unique() (as well as keeping a buffer lock on \"the first\nleaf page a duplicate might be on\" throughout) if you need the test to\nwork reliably. But why bother with that? The tool doesn't have to be\n100% perfect at detecting corruption (nothing can be), and it's rather\nunlikely that it will matter for this test. A simple test that doesn't\nhandle cross-page duplicates is still going to be very effective.\n\nI don't think that it's acceptable for your new check to raise a\nWARNING instead of an ERROR. I especially don't like that the new\nunique checking functionality merely warns that the index *might* be\ncorrupt. False positives are always unacceptable within amcheck, and I\nthink that this is a false positive.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 2 Mar 2021 19:10:32 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": ">\n> You're going to have to \"couple\" buffer locks in the style of\n> _bt_check_unique() (as well as keeping a buffer lock on \"the first\n> leaf page a duplicate might be on\" throughout) if you need the test to\n> work reliably. But why bother with that? The tool doesn't have to be\n> 100% perfect at detecting corruption (nothing can be), and it's rather\n> unlikely that it will matter for this test. A simple test that doesn't\n> handle cross-page duplicates is still going to be very effective.\n>\n\nIndeed at first, I did the test which doesn't bother checking duplicates\ncross-page which I considered very rare, but then a customer sent me his\ncorrupted index where I found this rare thing which was not detectable by\namcheck and he was puzzled with the issue. Even rare inconsistencies can\nappear when people handle huge amounts of data. 
So I did an update that\nhandles a wider class of errors. I don't suppose that cross page unique\ncheck is expensive as it uses same things that are already used in amcheck\nfor cross-page checks.\n\nIs it suitable if I omit suspected duplicates message in the very-very rare\ncase amcheck can not detect but leave cross-page checks?\n\n>>I don't think that it's acceptable for your new check to raise a\n>>WARNING instead of an ERROR.\nIt is not instead of an ERROR. If at least one violation is detected,\namcheck will output the final ERROR message. The purpose is not to stop\nchecking at the first violation. But I can make them reported in a current\namcheck style if it is necessary.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Wed, 3 Mar 2021 12:08:00 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "\n\n> On Mar 2, 2021, at 6:08 AM, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> \n> I completely agree that checking uniqueness requires looking at the heap, but I don't agree that every caller of bt_index_check on an index wants that particular check to be performed. There are multiple ways in which an index might be corrupt, and Peter wrote the code to only check some of them by default, with options to expand the checks to other things. This is why heapallindexed is optional. If you don't want to pay the price of checking all entries in the heap against the btree, you don't have to.\n> \n> I've got the idea and revised the patch accordingly. Thanks!\n> Pfa v4 of a patch. I've added an optional argument to allow uniqueness checks for the unique indexes.\n> Also, I added a test variant to make them work on 32-bit systems. Unfortunately, converting the regression test to TAP would be a pain for me. 
Hope it can be used now as a 2-variant regression test for 32 and 64 bit systems.\n> \n> Thank you for your consideration!\n> \n> -- \n> Best regards,\n> Pavel Borisov\n> \n> Postgres Professional: http://postgrespro.com\n> <v4-0001-Make-amcheck-checking-UNIQUE-constraint-for-btree.patch>\n\nLooking over v4, here are my review comments...\n\nI created the file contrib/amcheck/amcheck--1.2--1.3.sql during the v14 development cycle, so that is not released yet. If your patch goes out in v14, does it need to create an amcheck--1.3--1.4.sql, or could you fit your changes into the 1.2--1.3.sql file? (Does the project have a convention governing this?) This is purely a question. I'm not advising you to change anything here.\n\nYou need to update doc/src/sgml/amcheck.sgml to reflect the changes you made to the bt_index_check and bt_index_parent_check functions.\n\nYou need to update the recently committed src/bin/pg_amcheck project to include --checkunique as an option. This client application already has flags for heapallindexed and rootdescend. I can help with that if it isn't obvious what needs to be done. Note that pg_amcheck/t contains TAP tests that exercise the options, so you'll need to extend code coverage to include this new option.\n\n\n> On Mar 2, 2021, at 7:10 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> I don't think that it's acceptable for your new check to raise a\n> WARNING instead of an ERROR.\n\nYou already responded to Peter, and I can see that after raising WARNINGs about an index, the code raises an ERROR. That is different from behavior that pg_amcheck currently expects from contrib/amcheck functions. It will be interesting to see if that makes integration harder.\n\n\n> On Mar 2, 2021, at 6:54 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n>> The regression test you provided is not portable. I am getting lots of errors due to differing output of the form \"page lsn=0/4DAD7E0\". 
You might turn this into a TAP test and use a regular expression to check the output.\n> \n> I would test this using a custom opclass that does simple fault\n> injection. For example, an opclass that indexes integers, but can be\n> configured to dynamically make 0 values equal or unequal to each\n> other. That's more representative of real-world problems.\n> \n> You \"break the warranty\" by updating pg_index, even compared to\n> updating other system catalogs. In particular, you break the\n> \"indcheckxmin wait -- wait for xmin to be old before using index\"\n> stuff in get_relation_info(). So it seems worse than updating\n> pg_attribute, for example (which is something that the tests do\n> already).\n\nTake a look at src/bin/pg_amcheck/t/005_opclass_damage.pl for an example of changing an opclass to test btree index breakage.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 15 Mar 2021 08:11:29 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "On 3/15/21 11:11 AM, Mark Dilger wrote:\n> \n>> On Mar 2, 2021, at 6:08 AM, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>>\n>> I completely agree that checking uniqueness requires looking at the heap, but I don't agree that every caller of bt_index_check on an index wants that particular check to be performed. There are multiple ways in which an index might be corrupt, and Peter wrote the code to only check some of them by default, with options to expand the checks to other things. This is why heapallindexed is optional. If you don't want to pay the price of checking all entries in the heap against the btree, you don't have to.\n>>\n>> I've got the idea and revised the patch accordingly. Thanks!\n>> Pfa v4 of a patch. 
I've added an optional argument to allow uniqueness checks for the unique indexes.\n>> Also, I added a test variant to make them work on 32-bit systems. Unfortunately, converting the regression test to TAP would be a pain for me. Hope it can be used now as a 2-variant regression test for 32 and 64 bit systems.\n>>\n>> Thank you for your consideration!\n>>\n>> -- \n>> Best regards,\n>> Pavel Borisov\n>>\n>> Postgres Professional: http://postgrespro.com\n>> <v4-0001-Make-amcheck-checking-UNIQUE-constraint-for-btree.patch>\n> \n> Looking over v4, here are my review comments...\n\nThis patch appears to need some work and has not been updated in several \nweeks, so marking Returned with Feedback.\n\nPlease submit to the next CF when you have a new patch.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n", "msg_date": "Thu, 8 Apr 2021 10:36:36 -0400", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": ">\n> >> I completely agree that checking uniqueness requires looking at the\n> heap, but I don't agree that every caller of bt_index_check on an index\n> wants that particular check to be performed. There are multiple ways in\n> which an index might be corrupt, and Peter wrote the code to only check\n> some of them by default, with options to expand the checks to other\n> things. This is why heapallindexed is optional. If you don't want to pay\n> the price of checking all entries in the heap against the btree, you don't\n> have to.\n> >>\n> >> I've got the idea and revised the patch accordingly. Thanks!\n> >> Pfa v4 of a patch. I've added an optional argument to allow uniqueness\n> checks for the unique indexes.\n> >> Also, I added a test variant to make them work on 32-bit systems.\n> Unfortunately, converting the regression test to TAP would be a pain for\n> me. 
Hope it can be used now as a 2-variant regression test for 32 and 64\n> bit systems.\n> >>\n> >> Thank you for your consideration!\n> >>\n> >> --\n> >> Best regards,\n> >> Pavel Borisov\n> >>\n> >> Postgres Professional: http://postgrespro.com\n> >> <v4-0001-Make-amcheck-checking-UNIQUE-constraint-for-btree.patch>\n> >\n> > Looking over v4, here are my review comments...\n>\n\nMark and Peter, big thanks for your ideas!\n\nI had little time to work on this feature until recently, but finally, I've\nelaborated v5 patch (PFA)\nIt contains the following improvements, most of which are based on your\nconsideration:\n\n- Amcheck tests are reworked into TAP-tests with \"break the warranty\" by\ncomparison function changes in the opclass instead of pg_index update.\nMark, again thanks for the sample!\n- Added new --checkunique option into pg_amcheck.\n- Added documentation and tests for amcheck and pg_amcheck new functions.\n- Results are output at ERROR log level like it is done in the other\namcheck tests.\n- Rare case of inability to check due to the first entry on a leaf page\nbeing both: (1) equal to the last one on the previous page and (2) deleted\nin the heap, is demoted to DEBUG1 log level. In this, I folowed Peter's\nconsideration that amcheck should do its best to check, but can not always\nverify everything. 
The case is expected to be extremely rare.\n- Made pages connectivity based on btpo_next (corrected a bug in the code,\nI found meanwhile)\n- If snapshot is already taken in heapallindexed case, reuse it for unique\nconstraint check.\n\nThe patch is pgindented and rebased on the current PG master code.\nI'd like to re-attach the patch v5 to the upcoming CF if you don't mind.\n\nI also want to add that some customers face index uniqueness\nviolations more often than I've expected, and I hope this check could also\nhelp some other PostgreSQL customers.\n\nYour further considerations are welcome as always!\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Mon, 20 Dec 2021 19:37:35 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "\n\n> On Dec 20, 2021, at 7:37 AM, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> \n> The patch is pgindented and rebased on the current PG master code.\n\nThank you, Pavel.\n\n\nThe tests in check_btree.sql no longer create a bttest_unique table, so the DROP TABLE is surplusage:\n\n+DROP TABLE bttest_unique;\n+ERROR: table \"bttest_unique\" does not exist\n\n\nThe changes in pg_amcheck.c to pass the new checkunique parameter will likely need to be based on a amcheck version check. The implementation of prepare_btree_command() in pg_amcheck.c should be kept compatible with older versions of amcheck, because it connects to remote servers and you can't know in advance that the remote servers are as up-to-date as the machine where pg_amcheck is installed. 
I'm thinking specifically about this change:\n\n@@ -871,7 +877,8 @@ prepare_btree_command(PQExpBuffer sql, RelationInfo *rel, PGconn *conn)\n if (opts.parent_check)\n appendPQExpBuffer(sql,\n \"SELECT %s.bt_index_parent_check(\"\n- \"index := c.oid, heapallindexed := %s, rootdescend := %s)\"\n+ \"index := c.oid, heapallindexed := %s, rootdescend := %s, \"\n+ \"checkunique := %s)\"\n \"\\nFROM pg_catalog.pg_class c, pg_catalog.pg_index i \"\n \"WHERE c.oid = %u \"\n \"AND c.oid = i.indexrelid \"\n\nIf the user calls pg_amcheck with --checkunique, and one or more remote servers have an amcheck version < 1.4, at a minimum you'll need to avoid calling bt_index_parent_check with that parameter, and probably also you'll either need to raise a warning or perhaps an error telling the user that such a check cannot be performed.\n\n\nYou've forgotten to include contrib/amcheck/amcheck--1.3--1.4.sql in the v5 patch, resulting in a failed install:\n\n/usr/bin/install -c -m 644 ./amcheck--1.3--1.4.sql ./amcheck--1.2--1.3.sql ./amcheck--1.1--1.2.sql ./amcheck--1.0--1.1.sql ./amcheck--1.0.sql '/Users/mark.dilger/hydra/unique_review.5/tmp_install/Users/mark.dilger/pgtest/test_install/share/postgresql/extension/'\ninstall: ./amcheck--1.3--1.4.sql: No such file or directory\nmake[2]: *** [install] Error 71\nmake[1]: *** [checkprep] Error 2\n\nUsing the one from the v4 patch fixes the problem. Please include this file in v6, to simplify the review process.\n\n\nThe changes to t/005_opclass_damage.pl look ok. The creation of a new table for the new test seems unnecessary, but only problematic in that it makes the test slightly longer to read. 
I recommend changing the test to use the same table that the prior test uses, but that is just a recommendation, not a requirement.\n\nYou should add coverage for --checkunique to t/003_check.pl.\n\nYou should add coverage for multiple PostgreSQL::Test::Cluster instances running differing versions of amcheck, perhaps some on version 1.3 and some on version 1.4. Then test that the --checkunique option works adequately.\n\n\nI have not reviewed the changes to verify_nbtree.c. I'll leave that to Peter.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 20 Dec 2021 10:02:18 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": ">\n> The tests in check_btree.sql no longer create a bttest_unique table, so\n> the DROP TABLE is surplusage:\n>\n> +DROP TABLE bttest_unique;\n> +ERROR: table \"bttest_unique\" does not exist\n>\n>\n> The changes in pg_amcheck.c to pass the new checkunique parameter will\n> likely need to be based on a amcheck version check. The implementation of\n> prepare_btree_command() in pg_amcheck.c should be kept compatible with\n> older versions of amcheck, because it connects to remote servers and you\n> can't know in advance that the remote servers are as up-to-date as the\n> machine where pg_amcheck is installed. 
I'm thinking specifically about\n> this change:\n>\n> @@ -871,7 +877,8 @@ prepare_btree_command(PQExpBuffer sql, RelationInfo\n> *rel, PGconn *conn)\n> if (opts.parent_check)\n> appendPQExpBuffer(sql,\n> \"SELECT %s.bt_index_parent_check(\"\n> - \"index := c.oid, heapallindexed := %s,\n> rootdescend := %s)\"\n> + \"index := c.oid, heapallindexed := %s,\n> rootdescend := %s, \"\n> + \"checkunique := %s)\"\n> \"\\nFROM pg_catalog.pg_class c,\n> pg_catalog.pg_index i \"\n> \"WHERE c.oid = %u \"\n> \"AND c.oid = i.indexrelid \"\n>\n> If the user calls pg_amcheck with --checkunique, and one or more remote\n> servers have an amcheck version < 1.4, at a minimum you'll need to avoid\n> calling bt_index_parent_check with that parameter, and probably also you'll\n> either need to raise a warning or perhaps an error telling the user that\n> such a check cannot be performed.\n>\n>\n> You've forgotten to include contrib/amcheck/amcheck--1.3--1.4.sql in the\n> v5 patch, resulting in a failed install:\n>\n> /usr/bin/install -c -m 644 ./amcheck--1.3--1.4.sql ./amcheck--1.2--1.3.sql\n> ./amcheck--1.1--1.2.sql ./amcheck--1.0--1.1.sql ./amcheck--1.0.sql\n> '/Users/mark.dilger/hydra/unique_review.5/tmp_install/Users/mark.dilger/pgtest/test_install/share/postgresql/extension/'\n> install: ./amcheck--1.3--1.4.sql: No such file or directory\n> make[2]: *** [install] Error 71\n> make[1]: *** [checkprep] Error 2\n>\n> Using the one from the v4 patch fixes the problem. Please include this\n> file in v6, to simplify the review process.\n>\n>\n> The changes to t/005_opclass_damage.pl look ok. The creation of a new\n> table for the new test seems unnecessary, but only problematic in that it\n> makes the test slightly longer to read. 
I recommend changing the test to\n> use the same table that the prior test uses, but that is just a\n> recommendation, not a requirement.\n>\n> You should add coverage for --checkunique to t/003_check.pl.\n>\n> You should add coverage for multiple PostgreSQL::Test::Cluster instances\n> running differing versions of amcheck, perhaps some on version 1.3 and some\n> on version 1.4. Then test that the --checkunique option works adequately.\n>\n\nThank you, Mark!\n\nIn v6 (PFA) I've made the changes on your advice i.e.\n\n- pg_amcheck with --checkunique option will ignore uniqueness check (with a\nwarning) if amcheck version in a db is <1.4 and doesn't support the feature.\n- fixed unnecessary drop table in regression\n- use the existing table for uniqueness check in 005_opclass_damage.pl\n- added tests into 003_check.pl . It is only smoke test that just verifies\nnew functions.\n- added test contrib/amcheck/t/004_verify_nbtree_unique.pl it is more\nextensive test based on opclass damage which was intended to be main test\nfor amcheck, but which I've forgotten to add to commit in v5.\n005_opclass_damage.pl test, which you've seen in v5 is largely based on\nfirst part of 004_verify_nbtree_unique.pl (with the later calling\npg_amcheck, and the former calling bt_index_check(),\nbt_index_parent_check() )\n- added forgotten upgrade script amcheck--1.3--1.4.sql (from v4)\n\nYou are welcome with more considerations.\n\n--\nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Wed, 22 Dec 2021 12:01:24 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." 
}, { "msg_contents": "\n\n> On Dec 22, 2021, at 12:01 AM, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> \n> Thank you, Mark!\n> \n> In v6 (PFA) I've made the changes on your advice i.e.\n> \n> - pg_amcheck with --checkunique option will ignore uniqueness check (with a warning) if amcheck version in a db is <1.4 and doesn't support the feature.\n\nOk.\n\n+ int vmaj = 0,\n+ vmin = 0,\n+ vrev = 0;\n+ const char *amcheck_version = pstrdup(PQgetvalue(result, 0, 1));\n+\n+ sscanf(amcheck_version, \"%d.%d.%d\", &vmaj, &vmin, &vrev);\n\nThe pstrdup is unnecessary but harmless.\n\n> - fixed unnecessary drop table in regression\n\nOk.\n\n> - use the existing table for uniqueness check in 005_opclass_damage.pl\n\nIt appears you still create a new table, bttest_unique, rather than using the existing table int4tbl. That's fine.\n\n> - added tests into 003_check.pl . It is only smoke test that just verifies new functions.\n\n+\n+$node->command_checks_all(\n+ [\n+ @cmd, '-s', 's1', '-i', 't1_btree', '--parent-check',\n+ '--checkunique', 'db1'\n+ ],\n+ 2,\n+ [$index_missing_relation_fork_re],\n+ [$no_output_re],\n+ 'pg_amcheck smoke test --parent-check');\n+\n+$node->command_checks_all(\n+ [\n+ @cmd, '-s', 's1', '-i', 't1_btree', '--heapallindexed',\n+ '--rootdescend', '--checkunique', 'db1'\n+ ],\n+ 2,\n+ [$index_missing_relation_fork_re],\n+ [$no_output_re],\n+ 'pg_amcheck smoke test --heapallindexed --rootdescend');\n+\n+$node->command_checks_all(\n+ [ @cmd, '--checkunique', '-d', 'db1', '-d', 'db2', '-d', 'db3', '-S', 's*' ],\n+ 0, [$no_output_re], [$no_output_re],\n+ 'pg_amcheck excluding all corrupt schemas');\n+\n\nYou have borrowed the existing tests but forgot to change their names. (The last line of each check is the test name, such as 'pg_amcheck smoke test --parent-check'.) 
Please make each test name unique.\n\n> - added test contrib/amcheck/t/004_verify_nbtree_unique.pl it is more extensive test based on opclass damage which was intended to be main test for amcheck, but which I've forgotten to add to commit in v5.\n> 005_opclass_damage.pl test, which you've seen in v5 is largely based on first part of 004_verify_nbtree_unique.pl (with the later calling pg_amcheck, and the former calling bt_index_check(), bt_index_parent_check() )\n\nOk.\n\n> - added forgotten upgrade script amcheck--1.3--1.4.sql (from v4)\n\nOk.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 23 Dec 2021 08:31:29 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": ">\n> The pstrdup is unnecessary but harmless.\n>\n> > - use the existing table for uniqueness check in 005_opclass_damage.pl\n>\n> It appears you still create a new table, bttest_unique, rather than using\n> the existing table int4tbl. That's fine.\n>\n> > - added tests into 003_check.pl . It is only smoke test that just\n> verifies new functions.\n>\n> You have borrowed the existing tests but forgot to change their names.\n> (The last line of each check is the test name, such as 'pg_amcheck smoke\n> test --parent-check'.) Please make each test name unique.\n>\n\nThanks for your review! Fixed all these remaining things from patch v6.\nPFA v7 patch.\n\n---\nBest regards,\nMaxim Orlov.", "msg_date": "Thu, 23 Dec 2021 21:05:47 +0300", "msg_from": "=?UTF-8?B?0JzQsNC60YHQuNC8INCe0YDQu9C+0LI=?= <orlovmg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "Hi,\n\nOn Fri, Dec 24, 2021 at 2:06 AM Максим Орлов <orlovmg@gmail.com> wrote:\n>\n> Thanks for your review! 
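
The version gating discussed in the review above — only pass checkunique through to servers whose amcheck extension is at least 1.4 — can be sketched in isolation. The helper names below are made up for illustration and this is not the actual pg_amcheck code, but the parsing follows the sscanf pattern quoted from the patch:

```c
#include <stdio.h>

/*
 * Illustrative sketch (not actual pg_amcheck code): parse an extension
 * version string of the form "maj.min.rev" with the sscanf pattern from
 * the snippet under review.  Components absent from the string keep
 * their zero defaults; sscanf returns how many components it filled in.
 */
int
parse_amcheck_version(const char *version, int *vmaj, int *vmin, int *vrev)
{
	*vmaj = 0;
	*vmin = 0;
	*vrev = 0;
	return sscanf(version, "%d.%d.%d", vmaj, vmin, vrev);
}

/*
 * True if maj.min is at least 1.4 -- per this thread, the first amcheck
 * version whose bt_index_check()/bt_index_parent_check() accept a
 * checkunique argument.
 */
int
amcheck_has_checkunique(int vmaj, int vmin)
{
	return vmaj > 1 || (vmaj == 1 && vmin >= 4);
}
```

With this shape the client can warn and drop the checkunique parameter for a server reporting "1.3", while passing it through for "1.4" and later.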
Fixed all these remaining things from patch v6.\n> PFA v7 patch.\n\nThe cfbot reports that you have mixed declarations and code\n(https://cirrus-ci.com/task/6407449413419008):\n\n[17:21:26.926] pg_amcheck.c: In function ‘main’:\n[17:21:26.926] pg_amcheck.c:634:4: error: ISO C90 forbids mixed\ndeclarations and code [-Werror=declaration-after-statement]\n[17:21:26.926] 634 | int vmaj = 0,\n[17:21:26.926] | ^~~\n\n\n", "msg_date": "Wed, 12 Jan 2022 14:47:34 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": ">\n> The cfbot reports that you have mixed declarations and code\n> (https://cirrus-ci.com/task/6407449413419008):\n>\n> [17:21:26.926] pg_amcheck.c: In function ‘main’:\n> [17:21:26.926] pg_amcheck.c:634:4: error: ISO C90 forbids mixed\n> declarations and code [-Werror=declaration-after-statement]\n> [17:21:26.926] 634 | int vmaj = 0,\n> [17:21:26.926] | ^~~\n>\n\nCorrected this, thanks!\nAlso added more comments on this part of the code.\nPFA v8 of a patch\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Wed, 12 Jan 2022 11:58:24 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." 
}, { "msg_contents": "By the way I've forgotten to add one part of my code into the CF patch\nrelated to the treatment of NULL values in checking btree unique\nconstraints.\nPFA v9 of a patch with this minor code and tests additions.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Thu, 13 Jan 2022 13:53:23 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "I've updated the patch due to recent changes by Daniel Gustafsson\n(549ec201d6132b7).\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Mon, 21 Feb 2022 17:14:00 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "This patch was broken by d16773cdc86210493a2874cb0cf93f3883fcda73 \"Add\nmacros in hash and btree AMs to get the special area of their pages\"\n\nIf it's really just a few macros it should be easy enough to merge but\nit would be good to do a rebase given the number of other commits\nsince February anyways.\n\nOn Mon, 21 Feb 2022 at 09:14, Maxim Orlov <orlovmg@gmail.com> wrote:\n>\n> I've updated the patch due to recent changes by Daniel Gustafsson (549ec201d6132b7).\n>\n> --\n> Best regards,\n> Maxim Orlov.\n\n\n\n-- \ngreg\n\n\n", "msg_date": "Sat, 2 Apr 2022 20:02:07 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." 
}, { "msg_contents": ">\n> This patch was broken by d16773cdc86210493a2874cb0cf93f3883fcda73 \"Add\n> macros in hash and btree AMs to get the special area of their pages\"\n>\n> If it's really just a few macros it should be easy enough to merge but\n> it would be good to do a rebase given the number of other commits\n> since February anyways.\n>\n\nRebased, thanks!\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Mon, 4 Apr 2022 13:18:08 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "v11 patch do not apply due to recent code changes.\nRebased. PFA v12.\n\nPlease feel free to check and discuss it.\n\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Wed, 11 May 2022 17:04:36 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "CFbot says v12 patch does not apply.\nRebased. PFA v13.\nYour reviews are very much welcome!\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>", "msg_date": "Fri, 20 May 2022 17:46:38 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "Hi Pavel,\n\n> Rebased. PFA v13.\n> Your reviews are very much welcome!\n\nI noticed that this patch is in \"Needs Review\" state and it has been\nstuck for some time now, so I decided to take a look.\n\n```\n+SELECT bt_index_parent_check('bttest_a_idx', true, true, true);\n+SELECT bt_index_parent_check('bttest_b_idx', true, false, true);\n``\n\n1. 
This \"true, false, true\" sequence is difficult to read. I suggest\nwe use named arguments here.\n\n2. I believe there are some minor issues with the comments. E.g. instead of:\n\n- First key on next page is same\n- Make values 768 and 769 looks equal\n\nI would write:\n\n- The first key on the next page is the same\n- Make values 768 and 769 look equal\n\nThere are many little errors like these.\n\n```\n+# Copyright (c) 2021, PostgreSQL Global Development Group\n```\n\n3. Oh no. The copyright has expired!\n\n```\n+ <literal>true</literal>. When <parameter>checkunique</parameter>\n+ is <literal>true</literal> <function>bt_index_check</function> will\n```\n\n4. This piece of documentation was copy-pasted between two functions\nwithout change of the function name.\n\nOther than that to me the patch looks in pretty good shape. Here is\nv14 where I fixed my own nitpicks, with the permission of Pavel given\nofflist.\n\nIf no one sees any other defects I'm going to change the status of the\npatch to \"Ready to Committer\" in a short time.\n\n--\nBest regards,\nAleksander Alekseev", "msg_date": "Wed, 20 Jul 2022 17:15:34 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "Hi!\n\nI would make two cosmetic changes.\n\n1. I suggest replace description of function bt_report_duplicate() from\n```\n/*\n * Prepare and print an error message for unique constrain violation in\n * a btree index under WARNING level. Also set a flag to report ERROR\n * at the end of the check.\n */\n```\nto\n```\n/*\n * Prepare an error message for unique constrain violation in\n * a btree index and report ERROR.\n */\n```\n\n2. 
I think will be better to change test 004_verify_nbtree_unique.pl - \nreplace\n```\nuse Test::More tests => 6;\n```\nto\n```\nuse Test::More;\n...\ndone_testing();\n```\n(same as in the other three tests).\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com", "msg_date": "Wed, 7 Sep 2022 16:44:16 +0300", "msg_from": "Dmitry Koval <d.koval@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "Hi,\n\nI also would like to suggest a cosmetic change.\nIn v15 a new field checkunique is added after heapallindexed and before\nno_btree_expansion fields in struct definition, but in initialisation it is\nadded after no_btree_expansion:\n\n--- a/src/bin/pg_amcheck/pg_amcheck.c\n+++ b/src/bin/pg_amcheck/pg_amcheck.c\n@@ -102,6 +102,7 @@ typedef struct AmcheckOptions\n bool parent_check;\n bool rootdescend;\n bool heapallindexed;\n+ bool checkunique;\n\n /* heap and btree hybrid option */\n bool no_btree_expansion;\n@@ -132,7 +133,8 @@ static AmcheckOptions opts = {\n .parent_check = false,\n .rootdescend = false,\n .heapallindexed = false,\n- .no_btree_expansion = false\n+ .no_btree_expansion = false,\n+ .checkunique = false\n };\n\nI suggest to add checkunique field between heapallindexed and\nno_btree_expansion fields in initialisation as well as in definition:\n\n@@ -132,6 +133,7 @@ static AmcheckOptions opts = {\n .parent_check = false,\n .rootdescend = false,\n .heapallindexed = false,\n+ .checkunique = false,\n .no_btree_expansion = false\n };\n\n--\nBest regards,\nLitskevich Karina\nPostgres Professional: http://postgrespro.com/", "msg_date": "Thu, 8 Sep 2022 16:29:16 +0300", "msg_from": "Karina Litskevich <litskevichkarina@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." 
}, { "msg_contents": "Hi,\n\nOn 2022-05-20 17:46:38 +0400, Pavel Borisov wrote:\n> CFbot says v12 patch does not apply.\n> Rebased. PFA v13.\n> Your reviews are very much welcome!\n\nDue to the merge of the meson based build this patch needs to be\nadjusted: https://cirrus-ci.com/build/6350479973154816\n\nLooks like you need to add amcheck--1.3--1.4.sql to the list of files to be\ninstalled and t/004_verify_nbtree_unique.pl to the tests.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 22 Sep 2022 08:13:28 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "On Thu, 22 Sept 2022 at 18:13, Andres Freund <andres@anarazel.de> wrote:\n\n> Due to the merge of the meson based build this patch needs to be\n> adjusted: https://cirrus-ci.com/build/6350479973154816\n>\n> Looks like you need to add amcheck--1.3--1.4.sql to the list of files to be\n> installed and t/004_verify_nbtree_unique.pl to the tests.\n>\n> Greetings,\n>\n> Andres Freund\n>\n\nThanks! Fixed.\n\n-- \nBest regards,\nMaxim Orlov.", "msg_date": "Tue, 27 Sep 2022 11:04:00 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "Hi!\n\nI think, this patch was marked as \"Waiting on Author\", probably, by\nmistake. Since recent changes were done without any significant code\nchanges and CF bot how happy again.\n\nI'm going to move it to RfC, could I? If not, please tell why.\n\n-- \nBest regards,\nMaxim Orlov.\n\nHi!I think, this patch was marked as \"Waiting on Author\", probably, by mistake. Since recent changes were done without any significant code changes and CF bot how happy again.I'm going to move it to RfC, could I? 
If not, please tell why.-- Best regards,Maxim Orlov.", "msg_date": "Wed, 28 Sep 2022 11:36:40 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "Hi hackers,\n\n> I think, this patch was marked as \"Waiting on Author\", probably, by mistake. Since recent changes were done without any significant code changes and CF bot how happy again.\n>\n> I'm going to move it to RfC, could I? If not, please tell why.\n\nI restored the \"Ready for Committer\" state. I don't think it's a good\npractice to change the state every time the patch has a slight\nconflict or something. This is not helpful at all. Such things happen\nquite regularly and typically are fixed in a couple of days.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 28 Sep 2022 11:43:57 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "On Wed, Sep 28, 2022 at 11:44 AM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> > I think, this patch was marked as \"Waiting on Author\", probably, by mistake. Since recent changes were done without any significant code changes and CF bot how happy again.\n> >\n> > I'm going to move it to RfC, could I? If not, please tell why.\n>\n> I restored the \"Ready for Committer\" state. I don't think it's a good\n> practice to change the state every time the patch has a slight\n> conflict or something. This is not helpful at all. Such things happen\n> quite regularly and typically are fixed in a couple of days.\n\nThis patch seems useful to me. I went through the thread, it seems\nthat all the critics are addressed.\n\nI've rebased this patch. 
Also, I've run perltidy for tests, split\nlong errmsg() into errmsg(), errdetail() and errhint(), and do other\nminor enchantments.\n\nI think this patch is ready to go. I'm going to push it if there are\nno objections.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Tue, 24 Oct 2023 23:13:01 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "Hi, Alexander!\n\n\nOn Wed, 25 Oct 2023 at 00:13, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> On Wed, Sep 28, 2022 at 11:44 AM Aleksander Alekseev\n> <aleksander@timescale.com> wrote:\n> > > I think, this patch was marked as \"Waiting on Author\", probably, by mistake. Since recent changes were done without any significant code changes and CF bot how happy again.\n> > >\n> > > I'm going to move it to RfC, could I? If not, please tell why.\n> >\n> > I restored the \"Ready for Committer\" state. I don't think it's a good\n> > practice to change the state every time the patch has a slight\n> > conflict or something. This is not helpful at all. Such things happen\n> > quite regularly and typically are fixed in a couple of days.\n>\n> This patch seems useful to me. I went through the thread, it seems\n> that all the critics are addressed.\n>\n> I've rebased this patch. Also, I've run perltidy for tests, split\n> long errmsg() into errmsg(), errdetail() and errhint(), and do other\n> minor enchantments.\n>\n> I think this patch is ready to go. I'm going to push it if there are\n> no objections.\n>\n> ------\n> Regards,\n> Alexander Korotkov\n\nIt's very good that this long-standing patch is finally committed. 
Thanks a lot!\n\nRegards,\nPavel Borisov\n\n\n", "msg_date": "Mon, 30 Oct 2023 11:29:04 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "On Mon, Oct 30, 2023 at 11:29:04AM +0400, Pavel Borisov wrote:\n> On Wed, 25 Oct 2023 at 00:13, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > I think this patch is ready to go. I'm going to push it if there are\n> > no objections.\n\n> It's very good that this long-standing patch is finally committed. Thanks a lot!\n\nAgreed. I gave this feature (commit 5ae2087) a try. Thanks for implementing\nit. Could I get your input on two topics?\n\n\n==== 1. Cross-page comparison at \"first key on the next page\" only\n\nCross-page comparisons got this discussion upthread:\n\nOn Tue, Mar 02, 2021 at 07:10:32PM -0800, Peter Geoghegan wrote:\n> On Mon, Feb 8, 2021 at 2:46 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> > Caveat: if the first entry on the next index page has a key equal to the key on a previous page AND all heap tid's corresponding to this entry are invisible, currently cross-page check can not detect unique constraint violation between previous index page entry and 2nd, 3d and next current index page entries. In this case, there would be a message that recommends doing VACUUM to remove the invisible entries from the index and repeat the check. (Generally, it is recommended to do vacuum before the check, but for the testing purpose I'd recommend turning it off to check the detection of visible-invisible-visible duplicates scenarios)\n\n> You're going to have to \"couple\" buffer locks in the style of\n> _bt_check_unique() (as well as keeping a buffer lock on \"the first\n> leaf page a duplicate might be on\" throughout) if you need the test to\n> work reliably.\n\nThe amcheck feature has no lock coupling at its \"first key on the next page\"\ncheck. 
I think that's fine, because amcheck takes one snapshot at the\nbeginning and looks for pairs of visible-to-that-snapshot heap tuples with the\nsame scan key. _bt_check_unique(), unlike amcheck, must catch concurrent\ninserts. If amcheck \"checkunique\" wanted to detect duplicates that would\nappear when all transactions commit, it would need lock coupling. (I'm not\nsuggesting it do that.) Do you see a problem with the lack of lock coupling\nat \"first key on the next page\"?\n\n> But why bother with that? The tool doesn't have to be\n> 100% perfect at detecting corruption (nothing can be), and it's rather\n> unlikely that it will matter for this test. A simple test that doesn't\n> handle cross-page duplicates is still going to be very effective.\n\nI agree, but perhaps the \"first key on the next page\" code is more complex\nthan general-case code would be. If the lack of lock coupling is fine, then I\nthink memory context lifecycle is the only obstacle making index page\nboundaries special. Are there factors beyond that? We already have\nstate->lowkey kept across pages via MemoryContextAlloc(). Similar lines of\ncode could preserve the scan key for checkunique, making the \"first key on the\nnext page\" code unnecessary.\n\n\n==== 2. Raises runtime by 476% despite no dead tuples\n\nI used the following to create a table larger than RAM, 17GB table and 10GB\nindex on a system with 12GB RAM:\n\n\\set count 500000000\nbegin;\nset maintenance_work_mem = '1GB';\nset client_min_messages = debug1; -- debug2 is per-block spam\ncreate temp table t as select n from generate_series(1,:count) t(n);\ncreate unique index t_idx on t(n);\n\\dt+ t\n\\di+ t_idx\ncreate extension amcheck;\nselect bt_index_check('t_idx', heapallindexed => false, checkunique => false);\nselect bt_index_check('t_idx', heapallindexed => false, checkunique => true);\n\nAdding checkunique raised runtime from 58s to 276s, because it checks\nvisibility for every heap tuple. 
It could do the heap fetch and visibility\ncheck lazily, when the index yields two heap TIDs for one scan key. That\nshould give zero visibility checks for this particular test case, and it\ndoesn't add visibility checks to bloated-table cases. Pseudo-code:\n\n\t/*---\n\t * scan_key is the last uniqueness-relevant scan key observed as\n\t * bt_check_level_from_leftmost() moves right to traverse the leaf level.\n\t * Will be NULL if the next tuple can't be the second tuple of a\n\t * uniqueness violation, because any of the following apply:\n\t * - we're evaluating the first leaf tuple of the entire index\n\t * - last scan key had anynullkeys (never forms a uniqueness violation w/\n\t * any other scan key)\n\t */\n\tscan_key = NULL;\n\t/*\n\t * scan_key_known_visible==true indicates that scan_key_heap_tid is the\n\t * last _visible_ heap TID observed for scan_key. Otherwise,\n\t * scan_key_heap_tid is the last heap TID observed for scan_key, and we've\n\t * not yet checked its visibility.\n\t */\n\tbool scan_key_known_visible;\n\tscan_key_heap_tid;\n\tforeach itup (leftmost_leaf_level_tup .. rightmost_leaf_level_tup) {\n\t\tif (itup.anynullkeys)\n\t\t\tscan_key = NULL;\n\t\telse if (scan_key != NULL &&\n\t\t\t\t _bt_compare(scan_key, itup.key) == 0 &&\n\t\t\t\t (scan_key_known_visible ||\n\t\t\t\t (scan_key_known_visible = visible(scan_key_heap_tid))))\n\t\t{\n\t\t\tif (visible(itup.tid))\n\t\t\t\telog(ERROR, \"duplicate in unique index\");\n\t\t}\n\t\telse\n\t\t{\n\t\t\t/*\n\t\t\t * No prior uniqueness-relevant key, or key changed, or we just\n\t\t\t * learned scan_key_heap_tid was invisible. 
Make itup the\n\t\t\t * standard by which we judge future index tuples as we move\n\t\t\t * right.\n\t\t\t */\n\t\t\tscan_key = itup.key;\n\t\t\tscan_key_known_visible = false;\n\t\t\tscan_key_heap_tid = itup.tid;\n\t\t}\n\t}\n\n\n", "msg_date": "Sun, 24 Mar 2024 19:03:23 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "On Sun, Mar 24, 2024 at 10:03 PM Noah Misch <noah@leadboat.com> wrote:\n> > You're going to have to \"couple\" buffer locks in the style of\n> > _bt_check_unique() (as well as keeping a buffer lock on \"the first\n> > leaf page a duplicate might be on\" throughout) if you need the test to\n> > work reliably.\n>\n> The amcheck feature has no lock coupling at its \"first key on the next page\"\n> check. I think that's fine, because amcheck takes one snapshot at the\n> beginning and looks for pairs of visible-to-that-snapshot heap tuples with the\n> same scan key. _bt_check_unique(), unlike amcheck, must catch concurrent\n> inserts. If amcheck \"checkunique\" wanted to detect duplicates that would\n> appear when all transactions commit, it would need lock coupling. (I'm not\n> suggesting it do that.) Do you see a problem with the lack of lock coupling\n> at \"first key on the next page\"?\n\nPractically speaking, no, I see no problems.\n\n> I agree, but perhaps the \"first key on the next page\" code is more complex\n> than general-case code would be. If the lack of lock coupling is fine, then I\n> think memory context lifecycle is the only obstacle making index page\n> boundaries special. Are there factors beyond that?\n\nI believe that my concern back in 2021 was that the general complexity\nof cross-page checking was unlikely to be worth it. 
Note that\nnbtsplitloc.c is *maximally* aggressive about avoiding split points\nthat fall within some group of duplicates, so with a unique index it\nshould be very rare.\n\nAdmittedly, I was probably thinking about the complexity of adding a\nbunch of code just to be able to check uniqueness across page\nboundaries. I did mention lock coupling by name, but that was more of\na catch-all term for the problems in this area.\n\n> We already have\n> state->lowkey kept across pages via MemoryContextAlloc(). Similar lines of\n> code could preserve the scan key for checkunique, making the \"first key on the\n> next page\" code unnecessary.\n\nI suspect that I was overly focussed on the index structure itself\nback when I made these remarks. I might not have considered that just\nusing an MVCC snapshot for the TIDs makes the whole process safe,\nthough that now seems quite obvious.\n\nSeparately, I now see that the committed patch just reuses the code\nthat has long been used to check that things are in the correct order\nacross page boundaries: this is the bt_right_page_check_scankey check,\nwhich existed in the very earliest versions of amcheck. So while I\nagree that we could just keep the original scan key (from the last\nitem on every leaf page), and then make the check at the start of the\nnext page instead (as opposed to making it at the end of the previous\nleaf page, which is how it works now), it's not obvious that that\nwould be a good trade-off, all things considered.\n\nIt might still be a little better that way around, overall, but you're\nnot just talking about changing the recently committed checkunique\npatch (I think). You're also talking about restructuring the long\nestablished bt_right_page_check_scankey check (otherwise, what's the\npoint?). I'm not categorically opposed to that, but it's not as if\nit'll allow you to throw out a bunch of code -- AFAICT that proposal\ndoesn't have that clear advantage going for it. 
The race condition\nthat is described at great length in bt_right_page_check_scankey isn't\never going to be a problem for the recently committed checkunique\npatch (as you more or less pointed out yourself), but obviously it is\nstill a concern for the cross-page order check.\n\nIn summary, the old bt_right_page_check_scankey check is strictly\nconcerned with the consistency of a physical data structure (the index\nitself), whereas the new checkunique check makes sure that the logical\ncontent of the database is consistent (the index, the heap, and all\nassociated transaction status metadata have to be consistent). That\nmeans that the concerns that are described at length in\nbt_right_page_check_scankey (nor anything like those concerns) don't\napply to the new checkunique check. We agree on all that, I think. But\nit's less clear that that presents us with an opportunity to simplify\nthis patch.\n\n> Adding checkunique raised runtime from 58s to 276s, because it checks\n> visibility for every heap tuple. It could do the heap fetch and visibility\n> check lazily, when the index yields two heap TIDs for one scan key. That\n> should give zero visibility checks for this particular test case, and it\n> doesn't add visibility checks to bloated-table cases.\n\nThe added runtime that you report seems quite excessive to me. I'm\nreally surprised that the code doesn't manage to avoid visibility\nchecks in the absence of duplicates that might both have TIDs\nconsidered visible. Lazy visibility checking seems almost essential,\nand not just a nice-to-have optimization.\n\nIt seems like the implication of everything that you said about\nrefactoring/moving the check was that doing so would enable this\noptimization (at least an implementation along the lines of your\npseudo code). If that was what you intended, then it's not obvious to\nme why it is relevant. 
What, if anything, does it have to do with\nmaking the new checkunique visibility checks happen lazily?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 25 Mar 2024 12:03:10 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "On Mon, Mar 25, 2024 at 12:03:10PM -0400, Peter Geoghegan wrote:\n> On Sun, Mar 24, 2024 at 10:03 PM Noah Misch <noah@leadboat.com> wrote:\n\n> Separately, I now see that the committed patch just reuses the code\n> that has long been used to check that things are in the correct order\n> across page boundaries: this is the bt_right_page_check_scankey check,\n> which existed in the very earliest versions of amcheck. So while I\n> agree that we could just keep the original scan key (from the last\n> item on every leaf page), and then make the check at the start of the\n> next page instead (as opposed to making it at the end of the previous\n> leaf page, which is how it works now), it's not obvious that that\n> would be a good trade-off, all things considered.\n> \n> It might still be a little better that way around, overall, but you're\n> not just talking about changing the recently committed checkunique\n> patch (I think). You're also talking about restructuring the long\n> established bt_right_page_check_scankey check (otherwise, what's the\n> point?). I'm not categorically opposed to that, but it's not as if\n\nI wasn't thinking about changing the pre-v17 bt_right_page_check_scankey()\ncode. I got interested in this area when I saw the interaction of the new\n\"first key on the next page\" logic with bt_right_page_check_scankey(). The\npatch made bt_right_page_check_scankey() pass back rightfirstoffset. The new\ncode then does palloc_btree_page() and PageGetItem() with that offset, which\nbt_right_page_check_scankey() had already done. That smelled like a misplaced\ndistribution of responsibility. 
For a time, I suspected the new code should\nmove down into bt_right_page_check_scankey(). Then I transitioned to thinking\ncheckunique didn't need new code for the page boundary.\n\n> it'll allow you to throw out a bunch of code -- AFAICT that proposal\n> doesn't have that clear advantage going for it. The race condition\n> that is described at great length in bt_right_page_check_scankey isn't\n> ever going to be a problem for the recently committed checkunique\n> patch (as you more or less pointed out yourself), but obviously it is\n> still a concern for the cross-page order check.\n> \n> In summary, the old bt_right_page_check_scankey check is strictly\n> concerned with the consistency of a physical data structure (the index\n> itself), whereas the new checkunique check makes sure that the logical\n> content of the database is consistent (the index, the heap, and all\n> associated transaction status metadata have to be consistent). That\n> means that the concerns that are described at length in\n> bt_right_page_check_scankey (nor anything like those concerns) don't\n> apply to the new checkunique check. We agree on all that, I think. But\n> it's less clear that that presents us with an opportunity to simplify\n> this patch.\n\nSee above for why I anticipated a simplification opportunity with respect to\nnew-in-v17 code. Still, it may not pan out.\n\n> > Adding checkunique raised runtime from 58s to 276s, because it checks\n\nSide note: my last email incorrectly described that as \"raises runtime by\n476%\". It should have said \"by 376%\" or \"by a factor of 4.76\".\n\n> > visibility for every heap tuple. It could do the heap fetch and visibility\n> > check lazily, when the index yields two heap TIDs for one scan key. 
That\n> > should give zero visibility checks for this particular test case, and it\n> > doesn't add visibility checks to bloated-table cases.\n\n> It seems like the implication of everything that you said about\n> refactoring/moving the check was that doing so would enable this\n> optimization (at least an implementation along the lines of your\n> pseudo code). If that was what you intended, then it's not obvious to\n> me why it is relevant. What, if anything, does it have to do with\n> making the new checkunique visibility checks happen lazily?\n\nTheir connection is just being the two big-picture topics I found in\npost-commit review. Decisions about the cross-page check are indeed separable\nfrom decisions about lazy vs. eager visibility checks.\n\nThanks,\nnm\n\n\n", "msg_date": "Mon, 25 Mar 2024 11:24:43 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "On Mon, Mar 25, 2024 at 2:24 PM Noah Misch <noah@leadboat.com> wrote:\n> I wasn't thinking about changing the pre-v17 bt_right_page_check_scankey()\n> code. I got interested in this area when I saw the interaction of the new\n> \"first key on the next page\" logic with bt_right_page_check_scankey(). The\n> patch made bt_right_page_check_scankey() pass back rightfirstoffset. The new\n> code then does palloc_btree_page() and PageGetItem() with that offset, which\n> bt_right_page_check_scankey() had already done. That smelled like a misplaced\n> distribution of responsibility. For a time, I suspected the new code should\n> move down into bt_right_page_check_scankey(). Then I transitioned to thinking\n> checkunique didn't need new code for the page boundary.\n\nAh, I see. 
Somehow I missed this point when I recently took a fresh\nlook at the committed patch.\n\n I did notice (I meant to point out) that I have concerns about this\npart of the new uniqueness check code:\n\n\"\nif (P_IGNORE(topaque) || !P_ISLEAF(topaque))\n break;\n\"\n\nMy concern here is with the !P_ISLEAF(topaque) test -- it shouldn't be\nrequired. If the page in question isn't a leaf page, then the index\nmust be corrupt (or the page deletion recycle safety/drain technique\nthing is buggy). The \" !P_ISLEAF(topaque)\" part of the check is either\nsuperfluous or something that ought to be reported as corruption --\nit's not a legal/expected state.\n\nSeparately, I dislike the way the target block changes within\nbt_target_page_check(). The general idea behind verify_nbtree.c's\ntarget block is that every block becomes the target exactly once, in a\nclearly defined place. All corruption (in the index structure itself)\nis formally considered to be a problem with that particular target\nblock. I want to be able to clearly distinguish between the target and\ntarget's right sibling here, to explain my concerns, but they're kinda\nboth the target, so that's a lot harder than it should be. (Admittedly\ndirectly blaming the target block has always been a little bit\narbitrary, at least in certain cases, but even there it provides\nstructure that makes things much easier to describe unambiguously.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 29 Mar 2024 14:17:08 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "On Fri, Mar 29, 2024 at 02:17:08PM -0400, Peter Geoghegan wrote:\n> On Mon, Mar 25, 2024 at 2:24 PM Noah Misch <noah@leadboat.com> wrote:\n> > I wasn't thinking about changing the pre-v17 bt_right_page_check_scankey()\n> > code. 
I got interested in this area when I saw the interaction of the new\n> > \"first key on the next page\" logic with bt_right_page_check_scankey(). The\n> > patch made bt_right_page_check_scankey() pass back rightfirstoffset. The new\n> > code then does palloc_btree_page() and PageGetItem() with that offset, which\n> > bt_right_page_check_scankey() had already done. That smelled like a misplaced\n> > distribution of responsibility. For a time, I suspected the new code should\n> > move down into bt_right_page_check_scankey(). Then I transitioned to thinking\n> > checkunique didn't need new code for the page boundary.\n\n> I did notice (I meant to point out) that I have concerns about this\n> part of the new uniqueness check code:\n> \n> \"\n> if (P_IGNORE(topaque) || !P_ISLEAF(topaque))\n> break;\n> \"\n> \n> My concern here is with the !P_ISLEAF(topaque) test -- it shouldn't be\n> required. If the page in question isn't a leaf page, then the index\n> must be corrupt (or the page deletion recycle safety/drain technique\n> thing is buggy). The \" !P_ISLEAF(topaque)\" part of the check is either\n> superfluous or something that ought to be reported as corruption --\n> it's not a legal/expected state.\n\nGood point.\n\n> Separately, I dislike the way the target block changes within\n> bt_target_page_check(). The general idea behind verify_nbtree.c's\n> target block is that every block becomes the target exactly once, in a\n> clearly defined place.\n\nAgreed.\n\n\n", "msg_date": "Fri, 29 Mar 2024 16:47:52 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "On 24.10.23 22:13, Alexander Korotkov wrote:\n> On Wed, Sep 28, 2022 at 11:44 AM Aleksander Alekseev\n> <aleksander@timescale.com> wrote:\n>>> I think, this patch was marked as \"Waiting on Author\", probably, by mistake. 
Since recent changes were done without any significant code changes and CF bot how happy again.\n>>>\n>>> I'm going to move it to RfC, could I? If not, please tell why.\n>>\n>> I restored the \"Ready for Committer\" state. I don't think it's a good\n>> practice to change the state every time the patch has a slight\n>> conflict or something. This is not helpful at all. Such things happen\n>> quite regularly and typically are fixed in a couple of days.\n> \n> This patch seems useful to me. I went through the thread, it seems\n> that all the critics are addressed.\n> \n> I've rebased this patch. Also, I've run perltidy for tests, split\n> long errmsg() into errmsg(), errdetail() and errhint(), and do other\n> minor enchantments.\n> \n> I think this patch is ready to go. I'm going to push it if there are\n> no objections.\n\nI just found the new pg_amcheck option --checkunique in PG17-to-be. \nCould we rename this to --check-unique? Seems friendlier. Maybe also \nrename the bt_index_check function argument to check_unique.\n\n\n\n", "msg_date": "Wed, 17 Apr 2024 08:38:48 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "I did notice (I meant to point out) that I have concerns about this\n> part of the new uniqueness check code:\n> \"\n> if (P_IGNORE(topaque) || !P_ISLEAF(topaque))\n> break;\n> \"\n\nMy concern here is with the !P_ISLEAF(topaque) test -- it shouldn't be\n> required\n\nI agree. But I didn't see the need to check uniqueness constraints\nviolations in internal pages. Furthermore, it doesn't mean only a violation\nof constraint, but a major index corruption. I agree that checking and\nreporting this type of corruption separately is a possible thing.\n\n\n\nSeparately, I dislike the way the target block changes within\n> bt_target_page_check(). 
The general idea behind verify_nbtree.c's\n> target block is that every block becomes the target exactly once, in a\n> clearly defined place. All corruption (in the index structure itself)\n> is formally considered to be a problem with that particular target\n> block. I want to be able to clearly distinguish between the target and\n> target's right sibling here, to explain my concerns, but they're kinda\n> both the target, so that's a lot harder than it should be. (Admittedly\n> directly blaming the target block has always been a little bit\n> arbitrary, at least in certain cases, but even there it provides\n> structure that makes things much easier to describe unambiguously.)\n>\n\nThe possible way to load the target block only once is to get rid of the\ncross-page uniqueness violation check. I introduced it to catch more\npossible cases of uniqueness violations. Though they are expected to be\nextremely rare, and anyway the algorithm doesn't get any warranty, just\ndoes its best to catch what is possible. I don't object to this change.\n\nRegards,\nPavel.\n\n", "msg_date": "Wed, 17 Apr 2024 19:41:10 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "On Wed, Apr 17, 2024 at 6:41 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>> I did notice (I meant to point out) that I have concerns about this\n>> part of the new uniqueness check code:\n>> \"\n>> if (P_IGNORE(topaque) || !P_ISLEAF(topaque))\n>> break;\n>> \"\n>>\n>> My concern here is with the !P_ISLEAF(topaque) test -- it shouldn't be\n>> required\n>\n> I agree. But I didn't see the need to check uniqueness constraints violations in internal pages. Furthermore, it doesn't mean only a violation of constraint, but a major index corruption. I agree that checking and reporting this type of corruption separately is a possible thing.\n\nI think we could just throw an error in case of an unexpected internal\npage. It doesn't seem reasonable to continue the check with this type\nof corruption detected. 
If the tree linkage is corrupted we may enter\nan endless loop or something.\n\n>> Separately, I dislike the way the target block changes within\n>> bt_target_page_check(). The general idea behind verify_nbtree.c's\n>> target block is that every block becomes the target exactly once, in a\n>> clearly defined place. All corruption (in the index structure itself)\n>> is formally considered to be a problem with that particular target\n>> block. I want to be able to clearly distinguish between the target and\n>> target's right sibling here, to explain my concerns, but they're kinda\n>> both the target, so that's a lot harder than it should be. (Admittedly\n>> directly blaming the target block has always been a little bit\n>> arbitrary, at least in certain cases, but even there it provides\n>> structure that makes things much easier to describe unambiguously.)\n>\n> The possible way to load the target block only once is to get rid of the cross-page uniqueness violation check. I introduced it to catch more possible cases of uniqueness violations. Though they are expected to be extremely rare, and anyway the algorithm doesn't get any warranty, just does its best to catch what is possible. I don't object to this change.\n\nI think we could probably just avoid setting state->target during\ncross-page check. Just save that into a local variable and pass as an\nargument where needed.\n\nSkipping the visibility checks for \"only one tuple for scan key\" case\nlooks like very valuable optimization [1].\n\nI also think we should wrap lVis_* variables into struct. That would\nmake the way we pass them to functions more elegant.\n\nLinks.\n1. https://www.postgresql.org/message-id/20240325020323.fd.nmisch%40google.com\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Wed, 24 Apr 2024 12:57:24 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." 
}, { "msg_contents": "On Wed, Apr 17, 2024 at 9:38 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> On 24.10.23 22:13, Alexander Korotkov wrote:\n> > On Wed, Sep 28, 2022 at 11:44 AM Aleksander Alekseev\n> > <aleksander@timescale.com> wrote:\n> >>> I think, this patch was marked as \"Waiting on Author\", probably, by mistake. Since recent changes were done without any significant code changes and CF bot how happy again.\n> >>>\n> >>> I'm going to move it to RfC, could I? If not, please tell why.\n> >>\n> >> I restored the \"Ready for Committer\" state. I don't think it's a good\n> >> practice to change the state every time the patch has a slight\n> >> conflict or something. This is not helpful at all. Such things happen\n> >> quite regularly and typically are fixed in a couple of days.\n> >\n> > This patch seems useful to me. I went through the thread, it seems\n> > that all the critics are addressed.\n> >\n> > I've rebased this patch. Also, I've run perltidy for tests, split\n> > long errmsg() into errmsg(), errdetail() and errhint(), and do other\n> > minor enchantments.\n> >\n> > I think this patch is ready to go. I'm going to push it if there are\n> > no objections.\n>\n> I just found the new pg_amcheck option --checkunique in PG17-to-be.\n> Could we rename this to --check-unique? Seems friendlier. Maybe also\n> rename the bt_index_check function argument to check_unique.\n\n+1 from me\nLet's do so if nobody objects.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Wed, 24 Apr 2024 12:58:11 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." 
}, { "msg_contents": "Hi, hackers!\n\nOn Wed, 24 Apr 2024 at 13:58, Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> On Wed, Apr 17, 2024 at 9:38 AM Peter Eisentraut <peter@eisentraut.org>\n> wrote:\n> > On 24.10.23 22:13, Alexander Korotkov wrote:\n> > > On Wed, Sep 28, 2022 at 11:44 AM Aleksander Alekseev\n> > > <aleksander@timescale.com> wrote:\n> > >>> I think, this patch was marked as \"Waiting on Author\", probably, by\n> mistake. Since recent changes were done without any significant code\n> changes and CF bot how happy again.\n> > >>>\n> > >>> I'm going to move it to RfC, could I? If not, please tell why.\n> > >>\n> > >> I restored the \"Ready for Committer\" state. I don't think it's a good\n> > >> practice to change the state every time the patch has a slight\n> > >> conflict or something. This is not helpful at all. Such things happen\n> > >> quite regularly and typically are fixed in a couple of days.\n> > >\n> > > This patch seems useful to me. I went through the thread, it seems\n> > > that all the critics are addressed.\n> > >\n> > > I've rebased this patch. Also, I've run perltidy for tests, split\n> > > long errmsg() into errmsg(), errdetail() and errhint(), and do other\n> > > minor enchantments.\n> > >\n> > > I think this patch is ready to go. I'm going to push it if there are\n> > > no objections.\n> >\n> > I just found the new pg_amcheck option --checkunique in PG17-to-be.\n> > Could we rename this to --check-unique? Seems friendlier. 
Maybe also\n> > rename the bt_index_check function argument to check_unique.\n>\n> +1 from me\n> Let's do so if nobody objects.\n>\n\nThank you very much for your input in this thread!\n\nSee the patches based on the proposals in the attachment:\n\n0001: Optimize speed by avoiding heap visibility checking for different\nnon-deduplicated index tuples as proposed by Noah Misch\n\nSpeed measurements on my laptop using the exact method recommended by Noah\nupthread:\nCurrent master branch: checkunique off: 144s, checkunique on: 419s\nWith patch 0001: checkunique off: 141s, checkunique on: 171s\n\n0002: Use structure to store and transfer info about last visible heap\nentry (code refactoring) as proposed by Alexander Korotkov\n\n0003: Don't load rightpage into BtreeCheckState (code refactoring) as\nproposed by Peter Geoghegan\n\nLoading of right page for cross-page unique constraint check in the same\nway as in bt_right_page_check_scankey()\n\n0004: Report error when next page to a leaf is not a leaf as proposed by\nPeter Geoghegan\n\nI think it's a very improbable condition and this check might be not\nnecessary, but it's right and safe to break check and report error.\n\n0005: Rename checkunique parameter to more user friendly as proposed by\nPeter Eisentraut and Alexander Korotkov\n\nAgain many thanks for the useful proposals!\n\nRegards,\nPavel Borisov,\nSupabase", "msg_date": "Thu, 25 Apr 2024 16:59:54 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "Hi, hackers!\n\nOn Thu, Apr 25, 2024 at 4:00 PM Pavel Borisov <pashkin.elfe@gmail.com>\nwrote:\n\n> 0005: Rename checkunique parameter to more user friendly as proposed by\n> Peter Eisentraut and Alexander Korotkov\n>\n\nI'm not sure renaming checkunique is a good idea. 
Other arguments of\nbt_index_check and bt_index_parent_check functions (heapallindexed and\nrootdescend) don't have underscore character in them. Corresponding\npg_amcheck options (--heapallindexed and --rootdescend) are also written\nin one piece. check_unique and --check-unique stand out. Making arguments\nand options in different styles doesn't seem user friendly to me.\n\nBest regards,\nKarina Litskevich\nPostgres Professional: http://postgrespro.com/\n\n", "msg_date": "Thu, 25 Apr 2024 16:44:19 +0300", "msg_from": "Karina Litskevich <litskevichkarina@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "Hi, Karina!\n\nOn Thu, 25 Apr 2024 at 17:44, Karina Litskevich <litskevichkarina@gmail.com>\nwrote:\n\n> Hi, hackers!\n>\n> On Thu, Apr 25, 2024 at 4:00 PM Pavel Borisov <pashkin.elfe@gmail.com>\n> wrote:\n>\n>> 0005: Rename checkunique parameter to more user friendly as proposed by\n>> Peter Eisentraut and Alexander Korotkov\n>>\n>\n> I'm not sure renaming checkunique is a good idea. Other arguments of\n> bt_index_check and bt_index_parent_check functions (heapallindexed and\n> rootdescend) don't have underscore character in them. 
Corresponding\n> pg_amcheck options (--heapallindexed and --rootdescend) are also written\n> in one piece. check_unique and --check-unique stand out. Making arguments\n> and options in different styles doesn't seem user friendly to me.\n>\n\nI did it under the consensus of Peter Eisentraut and Alexander Korotkov.\nThe pro for renaming is more user-friendly naming, I also agree.\nThe cons is that we already have both styles: \"non-user friendly\"\nheapallindexed and rootdescend and \"user-friendly\" parent-check.\n\nI'm ready to go with consensus in this matter. It's also not yet too late\nto make it unique-check (instead of check-unique) to be better in style\nwith parent-check.\n\nKind regards,\nPavel\n\n", "msg_date": "Thu, 25 Apr 2024 17:54:39 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "On Thu, Apr 25, 2024 at 04:59:54PM +0400, Pavel Borisov wrote:\n> 0001: Optimize speed by avoiding heap visibility checking for different\n> non-deduplicated index tuples as proposed by Noah Misch\n> \n> Speed measurements on my laptop using the exact method recommended by Noah\n> upthread:\n> Current master branch: checkunique off: 144s, checkunique on: 419s\n> With patch 0001: checkunique off: 141s, checkunique on: 171s\n\nWhere is the CPU time going to make it still be 21% slower w/ checkunique on?\nIt's a great improvement vs. current master, but I don't have an obvious\nexplanation for the remaining +21%.\n\n\n", "msg_date": "Tue, 30 Apr 2024 19:24:12 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "Hi Noah,\n\nOn Wed, May 1, 2024 at 5:24 AM Noah Misch <noah@leadboat.com> wrote:\n> On Thu, Apr 25, 2024 at 04:59:54PM +0400, Pavel Borisov wrote:\n> > 0001: Optimize speed by avoiding heap visibility checking for different\n> > non-deduplicated index tuples as proposed by Noah Misch\n> >\n> > Speed measurements on my laptop using the exact method recommended by Noah\n> > upthread:\n> > Current master branch: checkunique off: 144s, checkunique on: 419s\n> > With patch 0001: checkunique off: 141s, checkunique on: 171s\n>\n> Where is the CPU time going to make it still be 21% slower w/ checkunique on?\n> It's a great improvement vs. 
current master, but I don't have an obvious\n> explanation for the remaining +21%.\n\nI think there is at least extra index tuples comparison.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Wed, 1 May 2024 05:26:13 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "On Wed, May 1, 2024 at 5:26 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Wed, May 1, 2024 at 5:24 AM Noah Misch <noah@leadboat.com> wrote:\n> > On Thu, Apr 25, 2024 at 04:59:54PM +0400, Pavel Borisov wrote:\n> > > 0001: Optimize speed by avoiding heap visibility checking for different\n> > > non-deduplicated index tuples as proposed by Noah Misch\n> > >\n> > > Speed measurements on my laptop using the exact method recommended by Noah\n> > > upthread:\n> > > Current master branch: checkunique off: 144s, checkunique on: 419s\n> > > With patch 0001: checkunique off: 141s, checkunique on: 171s\n> >\n> > Where is the CPU time going to make it still be 21% slower w/ checkunique on?\n> > It's a great improvement vs. current master, but I don't have an obvious\n> > explanation for the remaining +21%.\n>\n> I think there is at least extra index tuples comparison.\n\nThe revised patchset is attached. I applied cosmetical changes. I'm\ngoing to push it if no objections.\n\nI don't post the patch with rename of new option. It doesn't seem\nthere is a consensus. I must admit that keeping all the options in\nthe same naming convention makes sense.\n\n------\nRegards,\nAlexander Korotkov\nSupabase", "msg_date": "Fri, 10 May 2024 03:11:41 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> The revised patchset is attached. 
I applied cosmetical changes. I'm\n> going to push it if no objections.\n\nIs this really suitable material to be pushing post-feature-freeze?\nIt doesn't look like it's fixing any new-in-v17 issues.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 May 2024 20:42:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "Hi, Tom!\n\nOn Fri, 10 May 2024, 04:43 Tom Lane, <tgl@sss.pgh.pa.us> wrote:\n\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > The revised patchset is attached. I applied cosmetical changes. I'm\n> > going to push it if no objections.\n>\n> Is this really suitable material to be pushing post-feature-freeze?\n> It doesn't look like it's fixing any new-in-v17 issues.\n>\n> regards, tom lane\n>\n\nI think these patches are nice-to-have optimizations and refactorings to\nmake code look better. They are not necessary for the main feature. They\ndon't fix any bugs. But they were requested in the thread, and make sense\nin my opinion.\n\nI really don't know what's the policy of applying code improvements other\nthan bugfixes post feature-freeze. IMO they are safe to be appiled to v17,\nbut they also could be added later.\n\nRegards,\nPavel Borisov\nSupabase\n\n>\n\nHi, Tom!On Fri, 10 May 2024, 04:43 Tom Lane, <tgl@sss.pgh.pa.us> wrote:Alexander Korotkov <aekorotkov@gmail.com> writes:\n> The revised patchset is attached.  I applied cosmetical changes.  I'm\n> going to push it if no objections.\n\nIs this really suitable material to be pushing post-feature-freeze?\nIt doesn't look like it's fixing any new-in-v17 issues.\n\n                        regards, tom laneI think these patches are nice-to-have optimizations and refactorings to make code look better. They are not necessary for the main feature. They don't fix any bugs. But they were requested in the thread, and make sense in my opinion. 
I really don't know what's the policy of applying code improvements other than bugfixes post feature-freeze. IMO they are safe to be appiled to v17, but they also could be added later.Regards, Pavel BorisovSupabase", "msg_date": "Fri, 10 May 2024 11:34:25 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "On Fri, May 10, 2024 at 3:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > The revised patchset is attached. I applied cosmetical changes. I'm\n> > going to push it if no objections.\n>\n> Is this really suitable material to be pushing post-feature-freeze?\n> It doesn't look like it's fixing any new-in-v17 issues.\n\nThese are code improvements to the 5ae2087202, which answer critics in\nthe thread. 0001 comprises an optimization, but it's rather small and\nsimple. 0002 and 0003 contain refactoring. 0004 contains better\nerror reporting. For me this looks like pretty similar to what others\ncommit post-FF, isn't it?\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Fri, 10 May 2024 11:39:18 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "Hi, Alexander!\n\nOn Fri, 10 May 2024 at 12:39, Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> On Fri, May 10, 2024 at 3:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > > The revised patchset is attached. I applied cosmetical changes. 
I'm\n> > > going to push it if no objections.\n> >\n> > Is this really suitable material to be pushing post-feature-freeze?\n> > It doesn't look like it's fixing any new-in-v17 issues.\n>\n> These are code improvements to the 5ae2087202, which answer critics in\n> the thread. 0001 comprises an optimization, but it's rather small and\n> simple. 0002 and 0003 contain refactoring. 0004 contains better\n> error reporting. For me this looks like pretty similar to what others\n> commit post-FF, isn't it?\n>\nI've re-checked patches v2. Differences from v1 are in improving\nnaming/pgindent's/commit messages.\nIn 0002 order of variables in struct BtreeLastVisibleEntry changed.\nThis doesn't change code behavior.\n\nPatch v2-0003 doesn't contain credits and a discussion link. All other\npatches do.\n\nOverall, patches contain small performance optimization (0001), code\nrefactoring and error reporting changes. IMO they could be pushed post-FF.\n\nRegards,\nPavel.\n\nHi, Alexander!On Fri, 10 May 2024 at 12:39, Alexander Korotkov <aekorotkov@gmail.com> wrote:On Fri, May 10, 2024 at 3:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > The revised patchset is attached.  I applied cosmetical changes.  I'm\n> > going to push it if no objections.\n>\n> Is this really suitable material to be pushing post-feature-freeze?\n> It doesn't look like it's fixing any new-in-v17 issues.\n\nThese are code improvements to the 5ae2087202, which answer critics in\nthe thread.  0001 comprises an optimization, but it's rather small and\nsimple.  0002 and 0003 contain refactoring.  0004 contains better\nerror reporting.  For me this looks like pretty similar to what others\ncommit post-FF, isn't it?I've re-checked patches v2. Differences from v1 are in improving naming/pgindent's/commit messages.In 0002 order of variables in struct BtreeLastVisibleEntry changed. 
This doesn't change code behavior.Patch v2-0003 doesn't contain credits and a discussion link. All other patches do. Overall, patches contain small performance optimization (0001), code refactoring and error reporting changes. IMO they could be pushed post-FF.Regards,Pavel.", "msg_date": "Fri, 10 May 2024 16:10:04 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "\n\n> On May 10, 2024, at 5:10 AM, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> \n> Hi, Alexander!\n> \n> On Fri, 10 May 2024 at 12:39, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Fri, May 10, 2024 at 3:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > > The revised patchset is attached. I applied cosmetical changes. I'm\n> > > going to push it if no objections.\n> >\n> > Is this really suitable material to be pushing post-feature-freeze?\n> > It doesn't look like it's fixing any new-in-v17 issues.\n> \n> These are code improvements to the 5ae2087202, which answer critics in\n> the thread. 0001 comprises an optimization, but it's rather small and\n> simple. 0002 and 0003 contain refactoring. 0004 contains better\n> error reporting. For me this looks like pretty similar to what others\n> commit post-FF, isn't it?\n> I've re-checked patches v2. Differences from v1 are in improving naming/pgindent's/commit messages.\n> In 0002 order of variables in struct BtreeLastVisibleEntry changed. This doesn't change code behavior.\n> \n> Patch v2-0003 doesn't contain credits and a discussion link. All other patches do. \n> \n> Overall, patches contain small performance optimization (0001), code refactoring and error reporting changes. IMO they could be pushed post-FF.\n\nv2-0001's commit message itself says, \"This commit implements skipping keys\". 
I take no position on the correctness or value of the improvement, but it seems out of scope post feature freeze. The patch seems to postpone uniqueness checking until later in the scan than what the prior version did, and that kind of change could require more analysis than we have time for at this point in the release cycle.\n\n\nv2-0002 does appear to just be refactoring. I don't care for a small portion of that patch, but I doubt it violates the post feature freeze rules. In particular:\n\n + BtreeLastVisibleEntry lVis = {InvalidBlockNumber, InvalidOffsetNumber, -1, NULL};\n\n\nv2-0003 may be an improvement in some way, but it compounds some preexisting confusion also. There is already a member of the BtreeCheckState called \"target\" and a memory context in that struct called \"targetcontext\". That context is used to allocate pages \"state->target\", \"rightpage\", \"child\" and \"page\", but not \"metapage\". Perhaps \"targetcontext\" is a poor choice of name? \"notmetacontext\" is a terrible name, but closer to describing the purpose of the memory context. Care to propose something sensible?\n\nPrior to applying v2-0003, the rightpage was stored in state->target, and continued to be in state->target later when entering\n\n /*\n * * Downlink check *\n * \n * Additional check of child items iff this is an internal page and\n * caller holds a ShareLock. This happens for every downlink (item)\n * in target excluding the negative-infinity downlink (again, this is\n * because it has no useful value to compare).\n */ \n if (!P_ISLEAF(topaque) && state->readonly)\n bt_child_check(state, skey, offset);\n\nand thereafter. Now, the rightpage of state->target is created, checked, and free'd, and then the old state->target gets processed in the downlink check and thereafter. This is either introducing a bug, or fixing one, but the commit message is totally ambiguous about whether this is a bugfix or a code cleanup or something else? 
I think this kind of patch should have a super clear commit message about what it thinks it is doing.\n\n\nv2-0004 guards against a real threat, and is reasonable post feature freeze\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 10 May 2024 10:35:13 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "Hi, Mark!\n\n\nOn Fri, 10 May 2024, 21:35 Mark Dilger, <mark.dilger@enterprisedb.com>\nwrote:\n\n>\n>\n> > On May 10, 2024, at 5:10 AM, Pavel Borisov <pashkin.elfe@gmail.com>\n> wrote:\n> >\n> > Hi, Alexander!\n> >\n> > On Fri, 10 May 2024 at 12:39, Alexander Korotkov <aekorotkov@gmail.com>\n> wrote:\n> > On Fri, May 10, 2024 at 3:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > > > The revised patchset is attached. I applied cosmetical changes. I'm\n> > > > going to push it if no objections.\n> > >\n> > > Is this really suitable material to be pushing post-feature-freeze?\n> > > It doesn't look like it's fixing any new-in-v17 issues.\n> >\n> > These are code improvements to the 5ae2087202, which answer critics in\n> > the thread. 0001 comprises an optimization, but it's rather small and\n> > simple. 0002 and 0003 contain refactoring. 0004 contains better\n> > error reporting. For me this looks like pretty similar to what others\n> > commit post-FF, isn't it?\n> > I've re-checked patches v2. Differences from v1 are in improving\n> naming/pgindent's/commit messages.\n> > In 0002 order of variables in struct BtreeLastVisibleEntry changed. This\n> doesn't change code behavior.\n> >\n> > Patch v2-0003 doesn't contain credits and a discussion link. 
All other\n> patches do.\n> >\n> > Overall, patches contain small performance optimization (0001), code\n> refactoring and error reporting changes. IMO they could be pushed post-FF.\n>\n> v2-0001's commit message itself says, \"This commit implements skipping\n> keys\". I take no position on the correctness or value of the improvement,\n> but it seems out of scope post feature freeze. The patch seems to postpone\n> uniqueness checking until later in the scan than what the prior version\n> did, and that kind of change could require more analysis than we have time\n> for at this point in the release cycle.\n>\n>\n> v2-0002 does appear to just be refactoring. I don't care for a small\n> portion of that patch, but I doubt it violates the post feature freeze\n> rules. In particular:\n>\n> + BtreeLastVisibleEntry lVis = {InvalidBlockNumber,\n> InvalidOffsetNumber, -1, NULL};\n>\n>\n> v2-0003 may be an improvement in some way, but it compounds some\n> preexisting confusion also. There is already a member of the\n> BtreeCheckState called \"target\" and a memory context in that struct called\n> \"targetcontext\". That context is used to allocate pages \"state->target\",\n> \"rightpage\", \"child\" and \"page\", but not \"metapage\". Perhaps\n> \"targetcontext\" is a poor choice of name? \"notmetacontext\" is a terrible\n> name, but closer to describing the purpose of the memory context. Care to\n> propose something sensible?\n>\n> Prior to applying v2-0003, the rightpage was stored in state->target, and\n> continued to be in state->target later when entering\n>\n> /*\n> * * Downlink check *\n> *\n> * Additional check of child items iff this is an internal page and\n> * caller holds a ShareLock. This happens for every downlink\n> (item)\n> * in target excluding the negative-infinity downlink (again, this\n> is\n> * because it has no useful value to compare).\n> */\n> if (!P_ISLEAF(topaque) && state->readonly)\n> bt_child_check(state, skey, offset);\n>\n> and thereafter. 
Now, the rightpage of state->target is created, checked,\n> and free'd, and then the old state->target gets processed in the downlink\n> check and thereafter. This is either introducing a bug, or fixing one, but\n> the commit message is totally ambiguous about whether this is a bugfix or a\n> code cleanup or something else? I think this kind of patch should have a\n> super clear commit message about what it thinks it is doing.\n>\n>\n> v2-0004 guards against a real threat, and is reasonable post feature freeze\n>\n>\n> —\n> Mark Dilger\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nIMO 0003 doesn't introduce nor fixes a bug. It loads rightpage into a local\nvariable, rather that to a BtreeCheckState that can have another users of\nstate->target afterb uniqueness check in the future, but don't have now. So\nthe original patch is correct, and the goal of this refactoring is to untie\nrightpage fron state structure as it's used only transiently for cross-page\nunuque check. It's the same style as already used bt_right_page_check_scankey()\nthat loads rightpage into a local variable.\n\nFor 0002 I doubt I understand your actual positiob. Could you explain what\nit violates or doesn't violate?\n\nBest regards,\nPavel.\n
", "msg_date": "Fri, 10 May 2024 22:42:01 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "On Fri, 10 May 2024, 22:42 Pavel Borisov, <pashkin.elfe@gmail.com> wrote:\n\n> Hi, Mark!\n>\n>\n> On Fri, 10 May 2024, 21:35 Mark Dilger, <mark.dilger@enterprisedb.com>\n> wrote:\n>\n>>\n>>\n>> > On May 10, 2024, at 5:10 AM, Pavel Borisov <pashkin.elfe@gmail.com>\n>> wrote:\n>> >\n>> > Hi, Alexander!\n>> >\n>> > On Fri, 10 May 2024 at 12:39, Alexander Korotkov <aekorotkov@gmail.com>\n>> wrote:\n>> > On Fri, May 10, 2024 at 3:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> > > Alexander Korotkov <aekorotkov@gmail.com> writes:\n>> > > > The revised patchset is attached.  I applied cosmetical changes.\n>> I'm\n>> > > > going to push it if no objections.\n>> > >\n>> > > Is this really suitable material to be pushing post-feature-freeze?\n>> > > It doesn't look like it's fixing any new-in-v17 issues.\n>> >\n>> > These are code improvements to the 5ae2087202, which answer critics in\n>> > the thread.  0001 comprises an optimization, but it's rather small and\n>> > simple.  0002 and 0003 contain refactoring.  0004 contains better\n>> > error reporting.  For me this looks like pretty similar to what others\n>> > commit post-FF, isn't it?\n>> > I've re-checked patches v2. Differences from v1 are in improving\n>> naming/pgindent's/commit messages.\n>> > In 0002 order of variables in struct BtreeLastVisibleEntry changed.\n>> This doesn't change code behavior.\n>> >\n>> > Patch v2-0003 doesn't contain credits and a discussion link. All other\n>> patches do.\n>> >\n>> > Overall, patches contain small performance optimization (0001), code\n>> refactoring and error reporting changes. 
IMO they could be pushed post-FF.\n>>\n>> v2-0001's commit message itself says, \"This commit implements skipping\n>> keys\". I take no position on the correctness or value of the improvement,\n>> but it seems out of scope post feature freeze. The patch seems to postpone\n>> uniqueness checking until later in the scan than what the prior version\n>> did, and that kind of change could require more analysis than we have time\n>> for at this point in the release cycle.\n>>\n>>\n>> v2-0002 does appear to just be refactoring. I don't care for a small\n>> portion of that patch, but I doubt it violates the post feature freeze\n>> rules. In particular:\n>>\n>> + BtreeLastVisibleEntry lVis = {InvalidBlockNumber,\n>> InvalidOffsetNumber, -1, NULL};\n>>\n>>\n>> v2-0003 may be an improvement in some way, but it compounds some\n>> preexisting confusion also. There is already a member of the\n>> BtreeCheckState called \"target\" and a memory context in that struct called\n>> \"targetcontext\". That context is used to allocate pages \"state->target\",\n>> \"rightpage\", \"child\" and \"page\", but not \"metapage\". Perhaps\n>> \"targetcontext\" is a poor choice of name? \"notmetacontext\" is a terrible\n>> name, but closer to describing the purpose of the memory context. Care to\n>> propose something sensible?\n>>\n>> Prior to applying v2-0003, the rightpage was stored in state->target, and\n>> continued to be in state->target later when entering\n>>\n>> /*\n>> * * Downlink check *\n>> *\n>> * Additional check of child items iff this is an internal page\n>> and\n>> * caller holds a ShareLock. This happens for every downlink\n>> (item)\n>> * in target excluding the negative-infinity downlink (again,\n>> this is\n>> * because it has no useful value to compare).\n>> */\n>> if (!P_ISLEAF(topaque) && state->readonly)\n>> bt_child_check(state, skey, offset);\n>>\n>> and thereafter. 
Now, the rightpage of state->target is created, checked,\n>> and free'd, and then the old state->target gets processed in the downlink\n>> check and thereafter. This is either introducing a bug, or fixing one, but\n>> the commit message is totally ambiguous about whether this is a bugfix or a\n>> code cleanup or something else? I think this kind of patch should have a\n>> super clear commit message about what it thinks it is doing.\n>>\n>>\n>> v2-0004 guards against a real threat, and is reasonable post feature\n>> freeze\n>>\n>>\n>> —\n>> Mark Dilger\n>> EnterpriseDB: http://www.enterprisedb.com\n>> The Enterprise PostgreSQL Company\n>>\n>\n> IMO 0003 doesn't introduce nor fixes a bug. It loads rightpage into a\n> local variable, rather that to a BtreeCheckState that can have another\n> users of state->target afterb uniqueness check in the future, but don't\n> have now. So the original patch is correct, and the goal of this\n> refactoring is to untie rightpage fron state structure as it's used only\n> transiently for cross-page unuque check. It's the same style as already\n> used bt_right_page_check_scankey() that loads rightpage into a local\n> variable.\n>\n> For 0002 I doubt I understand your actual positiob. Could you explain what\n> it violates or doesn't violate?\n>\nPlease forgive many typos in the previous message, I wrote from phone.\n\nPavel.\n\n>\n
", "msg_date": "Fri, 10 May 2024 22:43:46 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "On Fri, May 10, 2024 at 8:35 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > On May 10, 2024, at 5:10 AM, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> > On Fri, 10 May 2024 at 12:39, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > On Fri, May 10, 2024 at 3:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > > > The revised patchset is attached.  I applied cosmetical changes.  I'm\n> > > > going to push it if no objections.\n> > >\n> > > Is this really suitable material to be pushing post-feature-freeze?\n> > > It doesn't look like it's fixing any new-in-v17 issues.\n> >\n> > These are code improvements to the 5ae2087202, which answer critics in\n> > the thread.  0001 comprises an optimization, but it's rather small and\n> > simple.  0002 and 0003 contain refactoring.  0004 contains better\n> > error reporting.  For me this looks like pretty similar to what others\n> > commit post-FF, isn't it?\n> > I've re-checked patches v2. Differences from v1 are in improving naming/pgindent's/commit messages.\n> > In 0002 order of variables in struct BtreeLastVisibleEntry changed. This doesn't change code behavior.\n> >\n> > Patch v2-0003 doesn't contain credits and a discussion link. All other patches do.\n> >\n> > Overall, patches contain small performance optimization (0001), code refactoring and error reporting changes. IMO they could be pushed post-FF.\n>\n> v2-0001's commit message itself says, \"This commit implements skipping keys\". 
I take no position on the correctness or value of the improvement, but it seems out of scope post feature freeze.  The patch seems to postpone uniqueness checking until later in the scan than what the prior version did, and that kind of change could require more analysis than we have time for at this point in the release cycle.\n\nFormally this could be classified as an algorithmic change and probably\nshould be postponed to the next release.  But that's quite a local\noptimization, which just postpones a function call within the same\niteration of the loop.  And the effect is huge.  Probably we could allow\nthis post-FF for the sake of a quality release, given it's a very local\nchange with a huge effect.\n\n> v2-0002 does appear to just be refactoring.  I don't care for a small portion of that patch, but I doubt it violates the post feature freeze rules.  In particular:\n>\n>   +   BtreeLastVisibleEntry lVis = {InvalidBlockNumber, InvalidOffsetNumber, -1, NULL};\n\nI don't understand what the problem is with this line and the post feature\nfreeze rules.  Please explain it more.\n\n> v2-0003 may be an improvement in some way, but it compounds some preexisting confusion also.  There is already a member of the BtreeCheckState called \"target\" and a memory context in that struct called \"targetcontext\".  That context is used to allocate pages \"state->target\", \"rightpage\", \"child\" and \"page\", but not \"metapage\".  Perhaps \"targetcontext\" is a poor choice of name?  \"notmetacontext\" is a terrible name, but closer to describing the purpose of the memory context.  Care to propose something sensible?\n>\n> Prior to applying v2-0003, the rightpage was stored in state->target, and continued to be in state->target later when entering\n>\n>         /*\n>          * * Downlink check *\n>          *\n>          * Additional check of child items iff this is an internal page and\n>          * caller holds a ShareLock. 
This happens for every downlink (item)\n>          * in target excluding the negative-infinity downlink (again, this is\n>          * because it has no useful value to compare).\n>          */\n>         if (!P_ISLEAF(topaque) && state->readonly)\n>             bt_child_check(state, skey, offset);\n>\n> and thereafter.  Now, the rightpage of state->target is created, checked, and free'd, and then the old state->target gets processed in the downlink check and thereafter.  This is either introducing a bug, or fixing one, but the commit message is totally ambiguous about whether this is a bugfix or a code cleanup or something else?  I think this kind of patch should have a super clear commit message about what it thinks it is doing.\n\nThe only bt_target_page_check() caller is\nbt_check_level_from_leftmost(), which overrides state->target in the\nnext iteration anyway.  I think the patch is just refactoring to\neliminate the confusion pointed out by Peter Geoghegan upthread.\n\n0002 and 0003 don't address any bugs, but it would be very nice to\naccept them, because it would simplify future backpatching in this\narea.\n\n> v2-0004 guards against a real threat, and is reasonable post feature freeze\n\nOk.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Fri, 10 May 2024 22:05:01 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "\n\n> On May 10, 2024, at 11:42 AM, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> \n> IMO 0003 doesn't introduce nor fixes a bug. It loads rightpage into a local variable, rather that to a BtreeCheckState that can have another users of state->target afterb uniqueness check in the future, but don't have now. So the original patch is correct, and the goal of this refactoring is to untie rightpage fron state structure as it's used only transiently for cross-page unuque check. 
It's the same style as already used bt_right_page_check_scankey() that loads rightpage into a local variable.\n\nWell, you can put an Assert(false) dead in the middle of the code we're discussing and all the regression tests still pass, so I'd argue the change is getting zero test coverage.\n\nThis patch introduces a change that stores a new page into variable \"rightpage\" rather than overwriting \"state->target\", which the old implementation most certainly did.  That means that after returning from bt_target_page_check() into the calling function bt_check_level_from_leftmost() the value in state->target is not what it would have been prior to this patch.  Now, that'd be irrelevant if nobody goes on to consult that value, but just 44 lines further down in bt_check_level_from_leftmost() state->target is clearly used.  So the behavior at that point is changing between the old and new versions of the code, and I think I'm within reason to ask if it was wrong before the patch, wrong after the patch, or something else?  Is this a bug being introduced, being fixed, or ... ?\n\nHaving a regression test that actually touches this code would go a fair way towards helping the analysis.\n\n> For 0002 I doubt I understand your actual positiob. Could you explain what it violates or doesn't violate?\n\nv2-0002 does not violate the post feature freeze restriction on new features so far as I can tell, but I just don't care for the variable initialization because it doesn't name the fields. 
If anybody refactored the struct they might not notice the need to reorder this initialization, and depending on various factors including compiler flags they might not get an error.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 10 May 2024 12:27:11 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "\n\n> On May 10, 2024, at 12:05 PM, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> \n> The only bt_target_page_check() caller is\n> bt_check_level_from_leftmost(), which overrides state->target in the\n> next iteration anyway.  I think the patch is just refactoring to\n> eliminate the confusion pointer by Peter Geoghegan upthread.\n\nI find your argument unconvincing.\n\nAfter bt_target_page_check() returns at line 919, and before bt_check_level_from_leftmost() overrides state->target in the next iteration, bt_check_level_from_leftmost() conditionally fetches an item from the page referenced by state->target.  See line 963.\n\nI'm left with four possibilities:\n\n\n1) bt_target_page_check() never gets to the code that uses \"rightpage\" rather than \"state->target\" in the same iteration where bt_check_level_from_leftmost() conditionally fetches an item from state->target, so the change you're making doesn't matter.\n\n2) The code prior to v2-0003 was wrong, having changed state->target in an inappropriate way, causing the wrong thing to happen at what is now line 963.  The patch fixes the bug, because state->target no longer gets overwritten where you are now using \"rightpage\" for the value.\n\n3) The code used to work, having set up state->target correctly in the place where you are now using \"rightpage\", but v2-0003 has broken that. 
\n\n4) It's been broken all along and your patch just changes from wrong to wrong.\n\n\nIf you believe (1) is true, then I'm complaining that you are relying far too much on action at a distance, and that you are not documenting it.  Even with documentation of this interrelationship, I'd be unhappy with how brittle the code is.  I cannot easily discern that the two don't ever happen in the same iteration, and I'm not at all convinced one way or the other.  I tried to set up some Asserts about that, but none of the test cases actually reach the new code, so adding Asserts doesn't help to investigate the question.\n\nIf (2) is true, then I'm complaining that the commit message doesn't mention the fact that this is a bug fix.  Bug fixes should be clearly documented as such, otherwise future work might assume the commit can be reverted with only stylistic consequences.\n\nIf (3) is true, then I'm complaining that the patch is flat busted.\n\nIf (4) is true, then maybe we should revert the entire feature, or have a discussion of mitigation efforts that are needed.\n\nRegardless of which of 1..4 you pick, I think it could all do with more regression test coverage.\n\n\nFor reference, I said something similar earlier today in another email to this thread:\n\nThis patch introduces a change that stores a new page into variable \"rightpage\" rather than overwriting \"state->target\", which the old implementation most certainly did.  That means that after returning from bt_target_page_check() into the calling function bt_check_level_from_leftmost() the value in state->target is not what it would have been prior to this patch.  Now, that'd be irrelevant if nobody goes on to consult that value, but just 44 lines further down in bt_check_level_from_leftmost() state->target is clearly used. 
So the behavior at that point is changing between the old and new versions of the code, and I think I'm within reason to ask if it was wrong before the patch, wrong after the patch, or something else? Is this a bug being introduced, being fixed, or ... ?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 10 May 2024 18:12:57 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> Regardless of which of 1..4 you pick, I think it could all do with more regression test coverage.\n\nIndeed. If we have no regression tests that reach this code, it's\nfolly to touch it at all, but most especially so post-feature-freeze.\n\nI think the *first* order of business ought to be to create some\ntest cases that reach this area. Perhaps they'll be too expensive\nto incorporate in our regular regression tests, but we could still\nuse them to investigate Mark's concerns.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 May 2024 21:38:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "On Sat, May 11, 2024 at 4:13 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > On May 10, 2024, at 12:05 PM, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > The only bt_target_page_check() caller is\n> > bt_check_level_from_leftmost(), which overrides state->target in the\n> > next iteration anyway. 
I think the patch is just refactoring to\n> > eliminate the confusion pointer by Peter Geoghegan upthread.\n>\n> I find your argument unconvincing.\n>\n> After bt_target_page_check() returns at line 919, and before bt_check_level_from_leftmost() overrides state->target in the next iteration, bt_check_level_from_leftmost() conditionally fetches an item from the page referenced by state->target. See line 963.\n>\n> I'm left with four possibilities:\n>\n>\n> 1) bt_target_page_check() never gets to the code that uses \"rightpage\" rather than \"state->target\" in the same iteration where bt_check_level_from_leftmost() conditionally fetches an item from state->target, so the change you're making doesn't matter.\n>\n> 2) The code prior to v2-0003 was wrong, having changed state->target in an inappropriate way, causing the wrong thing to happen at what is now line 963. The patch fixes the bug, because state->target no longer gets overwritten where you are now using \"rightpage\" for the value.\n>\n> 3) The code used to work, having set up state->target correctly in the place where you are now using \"rightpage\", but v2-0003 has broken that.\n>\n> 4) It's been broken all along and your patch just changes from wrong to wrong.\n>\n>\n> If you believe (1) is true, then I'm complaining that you are relying far to much on action at a distance, and that you are not documenting it. Even with documentation of this interrelationship, I'd be unhappy with how brittle the code is. I cannot easily discern that the two don't ever happen in the same iteration, and I'm not at all convinced one way or the other. I tried to set up some Asserts about that, but none of the test cases actually reach the new code, so adding Asserts doesn't help to investigate the question.\n>\n> If (2) is true, then I'm complaining that the commit message doesn't mention the fact that this is a bug fix. 
Bug fixes should be clearly documented as such, otherwise future work might assume the commit can be reverted with only stylistic consequences.\n>\n> If (3) is true, then I'm complaining that the patch is flat busted.\n>\n> If (4) is true, then maybe we should revert the entire feature, or have a discussion of mitigation efforts that are needed.\n>\n> Regardless of which of 1..4 you pick, I think it could all do with more regression test coverage.\n>\n>\n> For reference, I said something similar earlier today in another email to this thread:\n>\n> This patch introduces a change that stores a new page into variable \"rightpage\" rather than overwriting \"state->target\", which the old implementation most certainly did.  That means that after returning from bt_target_page_check() into the calling function bt_check_level_from_leftmost() the value in state->target is not what it would have been prior to this patch.  Now, that'd be irrelevant if nobody goes on to consult that value, but just 44 lines further down in bt_check_level_from_leftmost() state->target is clearly used.  So the behavior at that point is changing between the old and new versions of the code, and I think I'm within reason to ask if it was wrong before the patch, wrong after the patch, or something else?  Is this a bug being introduced, being fixed, or ... ?\n\nThank you for your analysis.  I'm inclined to believe in 2, but not\nyet completely sure.  It's really a pity that our tests don't cover\nthis.  I'm investigating this area.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Mon, 13 May 2024 00:23:42 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." 
}, { "msg_contents": "On Mon, May 13, 2024 at 12:23 AM Alexander Korotkov\n<aekorotkov@gmail.com> wrote:\n> On Sat, May 11, 2024 at 4:13 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n> > > On May 10, 2024, at 12:05 PM, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > > The only bt_target_page_check() caller is\n> > > bt_check_level_from_leftmost(), which overrides state->target in the\n> > > next iteration anyway. I think the patch is just refactoring to\n> > > eliminate the confusion pointer by Peter Geoghegan upthread.\n> >\n> > I find your argument unconvincing.\n> >\n> > After bt_target_page_check() returns at line 919, and before bt_check_level_from_leftmost() overrides state->target in the next iteration, bt_check_level_from_leftmost() conditionally fetches an item from the page referenced by state->target. See line 963.\n> >\n> > I'm left with four possibilities:\n> >\n> >\n> > 1) bt_target_page_check() never gets to the code that uses \"rightpage\" rather than \"state->target\" in the same iteration where bt_check_level_from_leftmost() conditionally fetches an item from state->target, so the change you're making doesn't matter.\n> >\n> > 2) The code prior to v2-0003 was wrong, having changed state->target in an inappropriate way, causing the wrong thing to happen at what is now line 963. The patch fixes the bug, because state->target no longer gets overwritten where you are now using \"rightpage\" for the value.\n> >\n> > 3) The code used to work, having set up state->target correctly in the place where you are now using \"rightpage\", but v2-0003 has broken that.\n> >\n> > 4) It's been broken all along and your patch just changes from wrong to wrong.\n> >\n> >\n> > If you believe (1) is true, then I'm complaining that you are relying far to much on action at a distance, and that you are not documenting it. Even with documentation of this interrelationship, I'd be unhappy with how brittle the code is. 
I cannot easily discern that the two don't ever happen in the same iteration, and I'm not at all convinced one way or the other. I tried to set up some Asserts about that, but none of the test cases actually reach the new code, so adding Asserts doesn't help to investigate the question.\n> >\n> > If (2) is true, then I'm complaining that the commit message doesn't mention the fact that this is a bug fix. Bug fixes should be clearly documented as such, otherwise future work might assume the commit can be reverted with only stylistic consequences.\n> >\n> > If (3) is true, then I'm complaining that the patch is flat busted.\n> >\n> > If (4) is true, then maybe we should revert the entire feature, or have a discussion of mitigation efforts that are needed.\n> >\n> > Regardless of which of 1..4 you pick, I think it could all do with more regression test coverage.\n> >\n> >\n> > For reference, I said something similar earlier today in another email to this thread:\n> >\n> > This patch introduces a change that stores a new page into variable \"rightpage\" rather than overwriting \"state->target\", which the old implementation most certainly did. That means that after returning from bt_target_page_check() into the calling function bt_check_level_from_leftmost() the value in state->target is not what it would have been prior to this patch. Now, that'd be irrelevant if nobody goes on to consult that value, but just 44 lines further down in bt_check_level_from_leftmost() state->target is clearly used. So the behavior at that point is changing between the old and new versions of the code, and I think I'm within reason to ask if it was wrong before the patch, wrong after the patch, or something else? Is this a bug being introduced, being fixed, or ... ?\n>\n> Thank you for your analysis. I'm inclined to believe in 2, but not\n> yet completely sure. It's really pity that our tests don't cover\n> this. I'm investigating this area.\n\nIt seems that I got to the bottom of this. 
Changing\nBtreeCheckState.target for a cross-page unique constraint check is\nwrong, but that happens only for leaf pages. After that\nBtreeCheckState.target is only used for setting the low key. The low\nkey is only used for non-leaf pages. So, that didn't lead to any\nvisible bug. I've revised the commit message to reflect this.\n\nSo, the picture for the patches is the following now.\n0001 – optimization, but rather simple and giving huge effect\n0002 – refactoring\n0003 – fix for the bug\n0004 – better error reporting\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Mon, 13 May 2024 04:42:20 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "Hi, Alexander!\n\nOn Mon, 13 May 2024 at 05:42, Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> On Mon, May 13, 2024 at 12:23 AM Alexander Korotkov\n> <aekorotkov@gmail.com> wrote:\n> > On Sat, May 11, 2024 at 4:13 AM Mark Dilger\n> > <mark.dilger@enterprisedb.com> wrote:\n> > > > On May 10, 2024, at 12:05 PM, Alexander Korotkov <\n> aekorotkov@gmail.com> wrote:\n> > > > The only bt_target_page_check() caller is\n> > > > bt_check_level_from_leftmost(), which overrides state->target in the\n> > > > next iteration anyway. I think the patch is just refactoring to\n> > > > eliminate the confusion pointer by Peter Geoghegan upthread.\n> > >\n> > > I find your argument unconvincing.\n> > >\n> > > After bt_target_page_check() returns at line 919, and before\n> bt_check_level_from_leftmost() overrides state->target in the next\n> iteration, bt_check_level_from_leftmost() conditionally fetches an item\n> from the page referenced by state->target. 
See line 963.\n> > >\n> > > I'm left with four possibilities:\n> > >\n> > >\n> > > 1) bt_target_page_check() never gets to the code that uses\n> \"rightpage\" rather than \"state->target\" in the same iteration where\n> bt_check_level_from_leftmost() conditionally fetches an item from\n> state->target, so the change you're making doesn't matter.\n> > >\n> > > 2) The code prior to v2-0003 was wrong, having changed state->target\n> in an inappropriate way, causing the wrong thing to happen at what is now\n> line 963. The patch fixes the bug, because state->target no longer gets\n> overwritten where you are now using \"rightpage\" for the value.\n> > >\n> > > 3) The code used to work, having set up state->target correctly in\n> the place where you are now using \"rightpage\", but v2-0003 has broken that.\n> > >\n> > > 4) It's been broken all along and your patch just changes from wrong\n> to wrong.\n> > >\n> > >\n> > > If you believe (1) is true, then I'm complaining that you are relying\n> far to much on action at a distance, and that you are not documenting it.\n> Even with documentation of this interrelationship, I'd be unhappy with how\n> brittle the code is. I cannot easily discern that the two don't ever\n> happen in the same iteration, and I'm not at all convinced one way or the\n> other. I tried to set up some Asserts about that, but none of the test\n> cases actually reach the new code, so adding Asserts doesn't help to\n> investigate the question.\n> > >\n> > > If (2) is true, then I'm complaining that the commit message doesn't\n> mention the fact that this is a bug fix. 
Bug fixes should be clearly\n> documented as such, otherwise future work might assume the commit can be\n> reverted with only stylistic consequences.\n> > >\n> > > If (3) is true, then I'm complaining that the patch is flat busted.\n> > >\n> > > If (4) is true, then maybe we should revert the entire feature, or\n> have a discussion of mitigation efforts that are needed.\n> > >\n> > > Regardless of which of 1..4 you pick, I think it could all do with\n> more regression test coverage.\n> > >\n> > >\n> > > For reference, I said something similar earlier today in another email\n> to this thread:\n> > >\n> > > This patch introduces a change that stores a new page into variable\n> \"rightpage\" rather than overwriting \"state->target\", which the old\n> implementation most certainly did. That means that after returning from\n> bt_target_page_check() into the calling function\n> bt_check_level_from_leftmost() the value in state->target is not what it\n> would have been prior to this patch. Now, that'd be irrelevant if nobody\n> goes on to consult that value, but just 44 lines further down in\n> bt_check_level_from_leftmost() state->target is clearly used. So the\n> behavior at that point is changing between the old and new versions of the\n> code, and I think I'm within reason to ask if it was wrong before the\n> patch, wrong after the patch, or something else? Is this a bug being\n> introduced, being fixed, or ... ?\n> >\n> > Thank you for your analysis. I'm inclined to believe in 2, but not\n> > yet completely sure. It's really pity that our tests don't cover\n> > this. I'm investigating this area.\n>\n> It seems that I got to the bottom of this. Changing\n> BtreeCheckState.target for a cross-page unique constraint check is\n> wrong, but that happens only for leaf pages. After that\n> BtreeCheckState.target is only used for setting the low key. The low\n> key is only used for non-leaf pages. So, that didn't lead to any\n> visible bug. 
I've revised the commit message to reflect this.\n>\n\nI agree with your analysis regarding state->target:\n- when the unique check is on, state->target was reassigned only for the\nleaf pages (under P_ISLEAF(topaque) in bt_target_page_check).\n- in this level (leaf) in bt_check_level_from_leftmost() this value of\nstate->target was used to get state->lowkey. Then it was reset (in the next\niteration of do loop in in bt_check_level_from_leftmost()\n- state->lowkey lives until the end of pages level (leaf) iteration cycle.\nThen, low-key is reset (state->lowkey = NULL in the end of\n bt_check_level_from_leftmost())\n- state->lowkey is used only in bt_child_check/bt_child_highkey_check. Both\nare called only from non-leaf pages iteration cycles (under\nP_ISLEAF(topaque))\n- Also there is a check (rightblock_number != P_NONE) in before getting\nrightpage into state->target in bt_target_page_check() that ensures us that\nrightpage indeed exists and getting this (unused) lowkey in\nbt_check_level_from_leftmost will not invoke any page reading errors.\n\nI'm pretty sure that there was no bug in this, not just the bug was hidden.\n\nIndeed re-assigning state->target in leaf page iteration for cross-page\nunique check was not beautiful, and Peter pointed out this. In my opinion\nthe patch 0003 is a pure code refactoring.\n\nAs for the cross-page check regression/TAP testing, this test had problems\nsince the btree page layout is not fixed (especially it's different on\n32-bit arch). I had a variant for testing cross-page check when the test\nwas yet regression one upthread for both 32/64 bit architectures. 
I\nremember it was decided not to include it due to complications and low\nimpact for testing the corner case of very rare cross-page duplicates.\n(There were also suggestions to drop cross-page duplicates check at all,\nwhich I didn't agree 2 years ago, but still it can make sense)\n\nSeparately, I propose to avoid getting state->lowkey for leaf pages at all\nas it's unused. PFA is a simple patch for this. (I don't add it to the\ncurrent patch set as I believe it has nothing to do with UNIQUE constraint\ncheck, rather it improves the previous btree amcheck code)\n\nBest regards,\nPavel Borisov,\nSupabase", "msg_date": "Mon, 13 May 2024 15:55:11 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "On Mon, 13 May 2024 at 15:55, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n\n> Hi, Alexander!\n>\n> On Mon, 13 May 2024 at 05:42, Alexander Korotkov <aekorotkov@gmail.com>\n> wrote:\n>\n>> On Mon, May 13, 2024 at 12:23 AM Alexander Korotkov\n>> <aekorotkov@gmail.com> wrote:\n>> > On Sat, May 11, 2024 at 4:13 AM Mark Dilger\n>> > <mark.dilger@enterprisedb.com> wrote:\n>> > > > On May 10, 2024, at 12:05 PM, Alexander Korotkov <\n>> aekorotkov@gmail.com> wrote:\n>> > > > The only bt_target_page_check() caller is\n>> > > > bt_check_level_from_leftmost(), which overrides state->target in the\n>> > > > next iteration anyway. I think the patch is just refactoring to\n>> > > > eliminate the confusion pointer by Peter Geoghegan upthread.\n>> > >\n>> > > I find your argument unconvincing.\n>> > >\n>> > > After bt_target_page_check() returns at line 919, and before\n>> bt_check_level_from_leftmost() overrides state->target in the next\n>> iteration, bt_check_level_from_leftmost() conditionally fetches an item\n>> from the page referenced by state->target. 
See line 963.\n>> > >\n>> > > I'm left with four possibilities:\n>> > >\n>> > >\n>> > > 1) bt_target_page_check() never gets to the code that uses\n>> \"rightpage\" rather than \"state->target\" in the same iteration where\n>> bt_check_level_from_leftmost() conditionally fetches an item from\n>> state->target, so the change you're making doesn't matter.\n>> > >\n>> > > 2) The code prior to v2-0003 was wrong, having changed state->target\n>> in an inappropriate way, causing the wrong thing to happen at what is now\n>> line 963. The patch fixes the bug, because state->target no longer gets\n>> overwritten where you are now using \"rightpage\" for the value.\n>> > >\n>> > > 3) The code used to work, having set up state->target correctly in\n>> the place where you are now using \"rightpage\", but v2-0003 has broken that.\n>> > >\n>> > > 4) It's been broken all along and your patch just changes from wrong\n>> to wrong.\n>> > >\n>> > >\n>> > > If you believe (1) is true, then I'm complaining that you are relying\n>> far to much on action at a distance, and that you are not documenting it.\n>> Even with documentation of this interrelationship, I'd be unhappy with how\n>> brittle the code is. I cannot easily discern that the two don't ever\n>> happen in the same iteration, and I'm not at all convinced one way or the\n>> other. I tried to set up some Asserts about that, but none of the test\n>> cases actually reach the new code, so adding Asserts doesn't help to\n>> investigate the question.\n>> > >\n>> > > If (2) is true, then I'm complaining that the commit message doesn't\n>> mention the fact that this is a bug fix. 
Bug fixes should be clearly\n>> documented as such, otherwise future work might assume the commit can be\n>> reverted with only stylistic consequences.\n>> > >\n>> > > If (3) is true, then I'm complaining that the patch is flat busted.\n>> > >\n>> > > If (4) is true, then maybe we should revert the entire feature, or\n>> have a discussion of mitigation efforts that are needed.\n>> > >\n>> > > Regardless of which of 1..4 you pick, I think it could all do with\n>> more regression test coverage.\n>> > >\n>> > >\n>> > > For reference, I said something similar earlier today in another\n>> email to this thread:\n>> > >\n>> > > This patch introduces a change that stores a new page into variable\n>> \"rightpage\" rather than overwriting \"state->target\", which the old\n>> implementation most certainly did. That means that after returning from\n>> bt_target_page_check() into the calling function\n>> bt_check_level_from_leftmost() the value in state->target is not what it\n>> would have been prior to this patch. Now, that'd be irrelevant if nobody\n>> goes on to consult that value, but just 44 lines further down in\n>> bt_check_level_from_leftmost() state->target is clearly used. So the\n>> behavior at that point is changing between the old and new versions of the\n>> code, and I think I'm within reason to ask if it was wrong before the\n>> patch, wrong after the patch, or something else? Is this a bug being\n>> introduced, being fixed, or ... ?\n>> >\n>> > Thank you for your analysis. I'm inclined to believe in 2, but not\n>> > yet completely sure. It's really pity that our tests don't cover\n>> > this. I'm investigating this area.\n>>\n>> It seems that I got to the bottom of this. Changing\n>> BtreeCheckState.target for a cross-page unique constraint check is\n>> wrong, but that happens only for leaf pages. After that\n>> BtreeCheckState.target is only used for setting the low key. The low\n>> key is only used for non-leaf pages. 
So, that didn't lead to any\n>> visible bug. I've revised the commit message to reflect this.\n>>\n>\n> I agree with your analysis regarding state->target:\n> - when the unique check is on, state->target was reassigned only for the\n> leaf pages (under P_ISLEAF(topaque) in bt_target_page_check).\n> - in this level (leaf) in bt_check_level_from_leftmost() this value of\n> state->target was used to get state->lowkey. Then it was reset (in the next\n> iteration of do loop in in bt_check_level_from_leftmost()\n> - state->lowkey lives until the end of pages level (leaf) iteration cycle.\n> Then, low-key is reset (state->lowkey = NULL in the end of\n> bt_check_level_from_leftmost())\n> - state->lowkey is used only in bt_child_check/bt_child_highkey_check.\n> Both are called only from non-leaf pages iteration cycles (under\n> P_ISLEAF(topaque))\n> - Also there is a check (rightblock_number != P_NONE) in before getting\n> rightpage into state->target in bt_target_page_check() that ensures us that\n> rightpage indeed exists and getting this (unused) lowkey in\n> bt_check_level_from_leftmost will not invoke any page reading errors.\n>\n> I'm pretty sure that there was no bug in this, not just the bug was hidden.\n>\n> Indeed re-assigning state->target in leaf page iteration for cross-page\n> unique check was not beautiful, and Peter pointed out this. In my opinion\n> the patch 0003 is a pure code refactoring.\n>\n> As for the cross-page check regression/TAP testing, this test had problems\n> since the btree page layout is not fixed (especially it's different on\n> 32-bit arch). I had a variant for testing cross-page check when the test\n> was yet regression one upthread for both 32/64 bit architectures. 
I\n> remember it was decided not to include it due to complications and low\n> impact for testing the corner case of very rare cross-page duplicates.\n> (There were also suggestions to drop cross-page duplicates check at all,\n> which I didn't agree 2 years ago, but still it can make sense)\n>\n> Separately, I propose to avoid getting state->lowkey for leaf pages at all\n> as it's unused. PFA is a simple patch for this. (I don't add it to the\n> current patch set as I believe it has nothing to do with UNIQUE constraint\n> check, rather it improves the previous btree amcheck code)\n>\n\nA correction of a typo in previous message:\nnon-leaf pages iteration cycles (under !P_ISLEAF(topaque)) -> non-leaf\npages iteration cycles (under !P_ISLEAF(topaque))\n\nOn Mon, 13 May 2024 at 15:55, Pavel Borisov <pashkin.elfe@gmail.com> wrote:Hi, Alexander!On Mon, 13 May 2024 at 05:42, Alexander Korotkov <aekorotkov@gmail.com> wrote:On Mon, May 13, 2024 at 12:23 AM Alexander Korotkov\n<aekorotkov@gmail.com> wrote:\n> On Sat, May 11, 2024 at 4:13 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n> > > On May 10, 2024, at 12:05 PM, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > > The only bt_target_page_check() caller is\n> > > bt_check_level_from_leftmost(), which overrides state->target in the\n> > > next iteration anyway.  I think the patch is just refactoring to\n> > > eliminate the confusion pointer by Peter Geoghegan upthread.\n> >\n> > I find your argument unconvincing.\n> >\n> > After bt_target_page_check() returns at line 919, and before bt_check_level_from_leftmost() overrides state->target in the next iteration, bt_check_level_from_leftmost() conditionally fetches an item from the page referenced by state->target.  
See line 963.\n> >\n> > I'm left with four possibilities:\n> >\n> >\n> > 1)  bt_target_page_check() never gets to the code that uses \"rightpage\" rather than \"state->target\" in the same iteration where bt_check_level_from_leftmost() conditionally fetches an item from state->target, so the change you're making doesn't matter.\n> >\n> > 2)  The code prior to v2-0003 was wrong, having changed state->target in an inappropriate way, causing the wrong thing to happen at what is now line 963.  The patch fixes the bug, because state->target no longer gets overwritten where you are now using \"rightpage\" for the value.\n> >\n> > 3)  The code used to work, having set up state->target correctly in the place where you are now using \"rightpage\", but v2-0003 has broken that.\n> >\n> > 4)  It's been broken all along and your patch just changes from wrong to wrong.\n> >\n> >\n> > If you believe (1) is true, then I'm complaining that you are relying far to much on action at a distance, and that you are not documenting it.  Even with documentation of this interrelationship, I'd be unhappy with how brittle the code is.  I cannot easily discern that the two don't ever happen in the same iteration, and I'm not at all convinced one way or the other.  I tried to set up some Asserts about that, but none of the test cases actually reach the new code, so adding Asserts doesn't help to investigate the question.\n> >\n> > If (2) is true, then I'm complaining that the commit message doesn't mention the fact that this is a bug fix.  
Bug fixes should be clearly documented as such, otherwise future work might assume the commit can be reverted with only stylistic consequences.\n> >\n> > If (3) is true, then I'm complaining that the patch is flat busted.\n> >\n> > If (4) is true, then maybe we should revert the entire feature, or have a discussion of mitigation efforts that are needed.\n> >\n> > Regardless of which of 1..4 you pick, I think it could all do with more regression test coverage.\n> >\n> >\n> > For reference, I said something similar earlier today in another email to this thread:\n> >\n> > This patch introduces a change that stores a new page into variable \"rightpage\" rather than overwriting \"state->target\", which the old implementation most certainly did.  That means that after returning from bt_target_page_check() into the calling function bt_check_level_from_leftmost() the value in state->target is not what it would have been prior to this patch.  Now, that'd be irrelevant if nobody goes on to consult that value, but just 44 lines further down in bt_check_level_from_leftmost() state->target is clearly used.  So the behavior at that point is changing between the old and new versions of the code, and I think I'm within reason to ask if it was wrong before the patch, wrong after the patch, or something else?  Is this a bug being introduced, being fixed, or ... ?\n>\n> Thank you for your analysis.  I'm inclined to believe in 2, but not\n> yet completely sure.  It's really pity that our tests don't cover\n> this.  I'm investigating this area.\n\nIt seems that I got to the bottom of this.  Changing\nBtreeCheckState.target for a cross-page unique constraint check is\nwrong, but that happens only for leaf pages.  After that\nBtreeCheckState.target is only used for setting the low key.  The low\nkey is only used for non-leaf pages.  So, that didn't lead to any\nvisible bug.  I've revised the commit message to reflect this. 
I agree with your analysis regarding state->target:- when the unique check is on, state->target was reassigned only for the leaf pages (under P_ISLEAF(topaque) in bt_target_page_check).- in this level (leaf) in bt_check_level_from_leftmost() this value of state->target was used to get state->lowkey. Then it was reset (in the next iteration of do loop in in bt_check_level_from_leftmost() - state->lowkey lives until the end of pages level (leaf) iteration cycle. Then, low-key is reset (state->lowkey = NULL in the end of  bt_check_level_from_leftmost())- state->lowkey is used only in bt_child_check/bt_child_highkey_check. Both are called only from non-leaf pages iteration cycles (under P_ISLEAF(topaque))- Also there is a check (rightblock_number != P_NONE) in before getting rightpage into state->target in bt_target_page_check() that ensures us that rightpage indeed exists and getting this (unused) lowkey in bt_check_level_from_leftmost will not invoke any page reading errors.I'm pretty sure that there was no bug in this, not just the bug was hidden.Indeed re-assigning state->target in leaf page iteration for cross-page unique check was not beautiful, and Peter pointed out this. In my opinion the patch 0003 is a pure code refactoring. As for the cross-page check regression/TAP testing, this test had problems since the btree page layout is not fixed (especially it's different on 32-bit arch). I had a variant for testing cross-page check when the test was yet regression one upthread for both 32/64 bit architectures. I remember it was decided not to include it due to complications and low impact for testing the corner case of very rare cross-page duplicates. (There were also suggestions to drop cross-page duplicates check at all, which I didn't agree 2 years ago, but still it can make sense)Separately, I propose to avoid getting state->lowkey for leaf pages at all as it's unused. PFA is a simple patch for this. 
(I don't add it to the current patch set as I believe it has nothing to do with UNIQUE constraint check, rather it improves the previous btree amcheck code)A correction of a typo in previous message:non-leaf pages iteration cycles (under !P_ISLEAF(topaque)) -> non-leaf pages iteration cycles (under !P_ISLEAF(topaque))", "msg_date": "Mon, 13 May 2024 16:19:54 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "A correction of a typo in previous message:\nnon-leaf pages iteration cycles (under P_ISLEAF(topaque)) -> non-leaf pages\niteration cycles (under !P_ISLEAF(topaque))\n\nOn Mon, 13 May 2024 at 16:19, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n\n>\n>\n> On Mon, 13 May 2024 at 15:55, Pavel Borisov <pashkin.elfe@gmail.com>\n> wrote:\n>\n>> Hi, Alexander!\n>>\n>> On Mon, 13 May 2024 at 05:42, Alexander Korotkov <aekorotkov@gmail.com>\n>> wrote:\n>>\n>>> On Mon, May 13, 2024 at 12:23 AM Alexander Korotkov\n>>> <aekorotkov@gmail.com> wrote:\n>>> > On Sat, May 11, 2024 at 4:13 AM Mark Dilger\n>>> > <mark.dilger@enterprisedb.com> wrote:\n>>> > > > On May 10, 2024, at 12:05 PM, Alexander Korotkov <\n>>> aekorotkov@gmail.com> wrote:\n>>> > > > The only bt_target_page_check() caller is\n>>> > > > bt_check_level_from_leftmost(), which overrides state->target in\n>>> the\n>>> > > > next iteration anyway. I think the patch is just refactoring to\n>>> > > > eliminate the confusion pointer by Peter Geoghegan upthread.\n>>> > >\n>>> > > I find your argument unconvincing.\n>>> > >\n>>> > > After bt_target_page_check() returns at line 919, and before\n>>> bt_check_level_from_leftmost() overrides state->target in the next\n>>> iteration, bt_check_level_from_leftmost() conditionally fetches an item\n>>> from the page referenced by state->target. 
See line 963.\n>>> > >\n>>> > > I'm left with four possibilities:\n>>> > >\n>>> > >\n>>> > > 1) bt_target_page_check() never gets to the code that uses\n>>> \"rightpage\" rather than \"state->target\" in the same iteration where\n>>> bt_check_level_from_leftmost() conditionally fetches an item from\n>>> state->target, so the change you're making doesn't matter.\n>>> > >\n>>> > > 2) The code prior to v2-0003 was wrong, having changed\n>>> state->target in an inappropriate way, causing the wrong thing to happen at\n>>> what is now line 963. The patch fixes the bug, because state->target no\n>>> longer gets overwritten where you are now using \"rightpage\" for the value.\n>>> > >\n>>> > > 3) The code used to work, having set up state->target correctly in\n>>> the place where you are now using \"rightpage\", but v2-0003 has broken that.\n>>> > >\n>>> > > 4) It's been broken all along and your patch just changes from\n>>> wrong to wrong.\n>>> > >\n>>> > >\n>>> > > If you believe (1) is true, then I'm complaining that you are\n>>> relying far to much on action at a distance, and that you are not\n>>> documenting it. Even with documentation of this interrelationship, I'd be\n>>> unhappy with how brittle the code is. I cannot easily discern that the two\n>>> don't ever happen in the same iteration, and I'm not at all convinced one\n>>> way or the other. I tried to set up some Asserts about that, but none of\n>>> the test cases actually reach the new code, so adding Asserts doesn't help\n>>> to investigate the question.\n>>> > >\n>>> > > If (2) is true, then I'm complaining that the commit message doesn't\n>>> mention the fact that this is a bug fix. 
Bug fixes should be clearly\n>>> documented as such, otherwise future work might assume the commit can be\n>>> reverted with only stylistic consequences.\n>>> > >\n>>> > > If (3) is true, then I'm complaining that the patch is flat busted.\n>>> > >\n>>> > > If (4) is true, then maybe we should revert the entire feature, or\n>>> have a discussion of mitigation efforts that are needed.\n>>> > >\n>>> > > Regardless of which of 1..4 you pick, I think it could all do with\n>>> more regression test coverage.\n>>> > >\n>>> > >\n>>> > > For reference, I said something similar earlier today in another\n>>> email to this thread:\n>>> > >\n>>> > > This patch introduces a change that stores a new page into variable\n>>> \"rightpage\" rather than overwriting \"state->target\", which the old\n>>> implementation most certainly did. That means that after returning from\n>>> bt_target_page_check() into the calling function\n>>> bt_check_level_from_leftmost() the value in state->target is not what it\n>>> would have been prior to this patch. Now, that'd be irrelevant if nobody\n>>> goes on to consult that value, but just 44 lines further down in\n>>> bt_check_level_from_leftmost() state->target is clearly used. So the\n>>> behavior at that point is changing between the old and new versions of the\n>>> code, and I think I'm within reason to ask if it was wrong before the\n>>> patch, wrong after the patch, or something else? Is this a bug being\n>>> introduced, being fixed, or ... ?\n>>> >\n>>> > Thank you for your analysis. I'm inclined to believe in 2, but not\n>>> > yet completely sure. It's really pity that our tests don't cover\n>>> > this. I'm investigating this area.\n>>>\n>>> It seems that I got to the bottom of this. Changing\n>>> BtreeCheckState.target for a cross-page unique constraint check is\n>>> wrong, but that happens only for leaf pages. After that\n>>> BtreeCheckState.target is only used for setting the low key. The low\n>>> key is only used for non-leaf pages. 
So, that didn't lead to any\n>>> visible bug. I've revised the commit message to reflect this.\n>>>\n>>\n>> I agree with your analysis regarding state->target:\n>> - when the unique check is on, state->target was reassigned only for the\n>> leaf pages (under P_ISLEAF(topaque) in bt_target_page_check).\n>> - in this level (leaf) in bt_check_level_from_leftmost() this value of\n>> state->target was used to get state->lowkey. Then it was reset (in the next\n>> iteration of do loop in in bt_check_level_from_leftmost()\n>> - state->lowkey lives until the end of pages level (leaf) iteration\n>> cycle. Then, low-key is reset (state->lowkey = NULL in the end of\n>> bt_check_level_from_leftmost())\n>> - state->lowkey is used only in bt_child_check/bt_child_highkey_check.\n>> Both are called only from non-leaf pages iteration cycles (under\n>> P_ISLEAF(topaque))\n>> - Also there is a check (rightblock_number != P_NONE) in before getting\n>> rightpage into state->target in bt_target_page_check() that ensures us that\n>> rightpage indeed exists and getting this (unused) lowkey in\n>> bt_check_level_from_leftmost will not invoke any page reading errors.\n>>\n>> I'm pretty sure that there was no bug in this, not just the bug was\n>> hidden.\n>>\n>> Indeed re-assigning state->target in leaf page iteration for cross-page\n>> unique check was not beautiful, and Peter pointed out this. In my opinion\n>> the patch 0003 is a pure code refactoring.\n>>\n>> As for the cross-page check regression/TAP testing, this test had\n>> problems since the btree page layout is not fixed (especially it's\n>> different on 32-bit arch). I had a variant for testing cross-page check\n>> when the test was yet regression one upthread for both 32/64 bit\n>> architectures. I remember it was decided not to include it due to\n>> complications and low impact for testing the corner case of very rare\n>> cross-page duplicates. 
(There were also suggestions to drop cross-page\n>> duplicates check at all, which I didn't agree 2 years ago, but still it can\n>> make sense)\n>>\n>> Separately, I propose to avoid getting state->lowkey for leaf pages at\n>> all as it's unused. PFA is a simple patch for this. (I don't add it to the\n>> current patch set as I believe it has nothing to do with UNIQUE constraint\n>> check, rather it improves the previous btree amcheck code)\n>>\n>\n> A correction of a typo in previous message:\n> non-leaf pages iteration cycles (under !P_ISLEAF(topaque)) -> non-leaf\n> pages iteration cycles (under !P_ISLEAF(topaque))\n>
(I don't add it to the current patch set as I believe it has nothing to do with UNIQUE constraint check, rather it improves the previous btree amcheck code)A correction of a typo in previous message:non-leaf pages iteration cycles (under !P_ISLEAF(topaque)) -> non-leaf pages iteration cycles (under !P_ISLEAF(topaque))", "msg_date": "Mon, 13 May 2024 16:20:25 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "On Mon, May 13, 2024 at 4:42 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Mon, May 13, 2024 at 12:23 AM Alexander Korotkov\n> <aekorotkov@gmail.com> wrote:\n> > On Sat, May 11, 2024 at 4:13 AM Mark Dilger\n> > <mark.dilger@enterprisedb.com> wrote:\n> > > > On May 10, 2024, at 12:05 PM, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > > > The only bt_target_page_check() caller is\n> > > > bt_check_level_from_leftmost(), which overrides state->target in the\n> > > > next iteration anyway. I think the patch is just refactoring to\n> > > > eliminate the confusion pointer by Peter Geoghegan upthread.\n> > >\n> > > I find your argument unconvincing.\n> > >\n> > > After bt_target_page_check() returns at line 919, and before bt_check_level_from_leftmost() overrides state->target in the next iteration, bt_check_level_from_leftmost() conditionally fetches an item from the page referenced by state->target. See line 963.\n> > >\n> > > I'm left with four possibilities:\n> > >\n> > >\n> > > 1) bt_target_page_check() never gets to the code that uses \"rightpage\" rather than \"state->target\" in the same iteration where bt_check_level_from_leftmost() conditionally fetches an item from state->target, so the change you're making doesn't matter.\n> > >\n> > > 2) The code prior to v2-0003 was wrong, having changed state->target in an inappropriate way, causing the wrong thing to happen at what is now line 963. 
The patch fixes the bug, because state->target no longer gets overwritten where you are now using \"rightpage\" for the value.\n> > >\n> > > 3) The code used to work, having set up state->target correctly in the place where you are now using \"rightpage\", but v2-0003 has broken that.\n> > >\n> > > 4) It's been broken all along and your patch just changes from wrong to wrong.\n> > >\n> > >\n> > > If you believe (1) is true, then I'm complaining that you are relying far to much on action at a distance, and that you are not documenting it. Even with documentation of this interrelationship, I'd be unhappy with how brittle the code is. I cannot easily discern that the two don't ever happen in the same iteration, and I'm not at all convinced one way or the other. I tried to set up some Asserts about that, but none of the test cases actually reach the new code, so adding Asserts doesn't help to investigate the question.\n> > >\n> > > If (2) is true, then I'm complaining that the commit message doesn't mention the fact that this is a bug fix. Bug fixes should be clearly documented as such, otherwise future work might assume the commit can be reverted with only stylistic consequences.\n> > >\n> > > If (3) is true, then I'm complaining that the patch is flat busted.\n> > >\n> > > If (4) is true, then maybe we should revert the entire feature, or have a discussion of mitigation efforts that are needed.\n> > >\n> > > Regardless of which of 1..4 you pick, I think it could all do with more regression test coverage.\n> > >\n> > >\n> > > For reference, I said something similar earlier today in another email to this thread:\n> > >\n> > > This patch introduces a change that stores a new page into variable \"rightpage\" rather than overwriting \"state->target\", which the old implementation most certainly did. 
That means that after returning from bt_target_page_check() into the calling function bt_check_level_from_leftmost() the value in state->target is not what it would have been prior to this patch. Now, that'd be irrelevant if nobody goes on to consult that value, but just 44 lines further down in bt_check_level_from_leftmost() state->target is clearly used. So the behavior at that point is changing between the old and new versions of the code, and I think I'm within reason to ask if it was wrong before the patch, wrong after the patch, or something else? Is this a bug being introduced, being fixed, or ... ?\n> >\n> > Thank you for your analysis. I'm inclined to believe in 2, but not\n> > yet completely sure. It's really pity that our tests don't cover\n> > this. I'm investigating this area.\n>\n> It seems that I got to the bottom of this. Changing\n> BtreeCheckState.target for a cross-page unique constraint check is\n> wrong, but that happens only for leaf pages. After that\n> BtreeCheckState.target is only used for setting the low key. The low\n> key is only used for non-leaf pages. So, that didn't lead to any\n> visible bug. I've revised the commit message to reflect this.\n>\n> So, the picture for the patches is the following now.\n> 0001 – optimization, but rather simple and giving huge effect\n> 0002 – refactoring\n> 0003 – fix for the bug\n> 0004 – better error reporting\n\nI think the thread contains enough motivation on why 0002, 0003 and\n0004 are material for post-FF. They are fixes and refactoring for\nnew-in-v17 feature. I'm going to push them if no objections.\n\nRegarding 0001, I'd like to ask Tom and Mark if they find convincing\nthat given that optimization is small, simple and giving huge effect,\nit could be pushed post-FF? 
Otherwise, this could wait for v18.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Fri, 17 May 2024 13:11:32 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "Hi, Alexander!\n\nOn Fri, 17 May 2024 at 14:11, Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> On Mon, May 13, 2024 at 4:42 AM Alexander Korotkov <aekorotkov@gmail.com>\n> wrote:\n> > On Mon, May 13, 2024 at 12:23 AM Alexander Korotkov\n> > <aekorotkov@gmail.com> wrote:\n> > > On Sat, May 11, 2024 at 4:13 AM Mark Dilger\n> > > <mark.dilger@enterprisedb.com> wrote:\n> > > > > On May 10, 2024, at 12:05 PM, Alexander Korotkov <\n> aekorotkov@gmail.com> wrote:\n> > > > > The only bt_target_page_check() caller is\n> > > > > bt_check_level_from_leftmost(), which overrides state->target in\n> the\n> > > > > next iteration anyway. I think the patch is just refactoring to\n> > > > > eliminate the confusion pointer by Peter Geoghegan upthread.\n> > > >\n> > > > I find your argument unconvincing.\n> > > >\n> > > > After bt_target_page_check() returns at line 919, and before\n> bt_check_level_from_leftmost() overrides state->target in the next\n> iteration, bt_check_level_from_leftmost() conditionally fetches an item\n> from the page referenced by state->target. See line 963.\n> > > >\n> > > > I'm left with four possibilities:\n> > > >\n> > > >\n> > > > 1) bt_target_page_check() never gets to the code that uses\n> \"rightpage\" rather than \"state->target\" in the same iteration where\n> bt_check_level_from_leftmost() conditionally fetches an item from\n> state->target, so the change you're making doesn't matter.\n> > > >\n> > > > 2) The code prior to v2-0003 was wrong, having changed\n> state->target in an inappropriate way, causing the wrong thing to happen at\n> what is now line 963. 
The patch fixes the bug, because state->target no\n> longer gets overwritten where you are now using \"rightpage\" for the value.\n> > > >\n> > > > 3) The code used to work, having set up state->target correctly in\n> the place where you are now using \"rightpage\", but v2-0003 has broken that.\n> > > >\n> > > > 4) It's been broken all along and your patch just changes from\n> wrong to wrong.\n> > > >\n> > > >\n> > > > If you believe (1) is true, then I'm complaining that you are\n> relying far to much on action at a distance, and that you are not\n> documenting it. Even with documentation of this interrelationship, I'd be\n> unhappy with how brittle the code is. I cannot easily discern that the two\n> don't ever happen in the same iteration, and I'm not at all convinced one\n> way or the other. I tried to set up some Asserts about that, but none of\n> the test cases actually reach the new code, so adding Asserts doesn't help\n> to investigate the question.\n> > > >\n> > > > If (2) is true, then I'm complaining that the commit message doesn't\n> mention the fact that this is a bug fix. Bug fixes should be clearly\n> documented as such, otherwise future work might assume the commit can be\n> reverted with only stylistic consequences.\n> > > >\n> > > > If (3) is true, then I'm complaining that the patch is flat busted.\n> > > >\n> > > > If (4) is true, then maybe we should revert the entire feature, or\n> have a discussion of mitigation efforts that are needed.\n> > > >\n> > > > Regardless of which of 1..4 you pick, I think it could all do with\n> more regression test coverage.\n> > > >\n> > > >\n> > > > For reference, I said something similar earlier today in another\n> email to this thread:\n> > > >\n> > > > This patch introduces a change that stores a new page into variable\n> \"rightpage\" rather than overwriting \"state->target\", which the old\n> implementation most certainly did. 
That means that after returning from\n> bt_target_page_check() into the calling function\n> bt_check_level_from_leftmost() the value in state->target is not what it\n> would have been prior to this patch. Now, that'd be irrelevant if nobody\n> goes on to consult that value, but just 44 lines further down in\n> bt_check_level_from_leftmost() state->target is clearly used. So the\n> behavior at that point is changing between the old and new versions of the\n> code, and I think I'm within reason to ask if it was wrong before the\n> patch, wrong after the patch, or something else? Is this a bug being\n> introduced, being fixed, or ... ?\n> > >\n> > > Thank you for your analysis. I'm inclined to believe in 2, but not\n> > > yet completely sure. It's really pity that our tests don't cover\n> > > this. I'm investigating this area.\n> >\n> > It seems that I got to the bottom of this. Changing\n> > BtreeCheckState.target for a cross-page unique constraint check is\n> > wrong, but that happens only for leaf pages. After that\n> > BtreeCheckState.target is only used for setting the low key. The low\n> > key is only used for non-leaf pages. So, that didn't lead to any\n> > visible bug. I've revised the commit message to reflect this.\n> >\n> > So, the picture for the patches is the following now.\n> > 0001 – optimization, but rather simple and giving huge effect\n> > 0002 – refactoring\n> > 0003 – fix for the bug\n> > 0004 – better error reporting\n>\n> I think the thread contains enough motivation on why 0002, 0003 and\n> 0004 are material for post-FF. They are fixes and refactoring for\n> new-in-v17 feature. I'm going to push them if no objections.\n>\n> Regarding 0001, I'd like to ask Tom and Mark if they find convincing\n> that given that optimization is small, simple and giving huge effect,\n> it could be pushed post-FF? Otherwise, this could wait for v18.\n>\n\nIn my view, patches 0002-0004 are worth pushing.\n0001 is ready in my view. 
But I see no problem pushing it into v18\nregarding that this optimization could be not eligible for post-FF. I don't\nknow the criteria for this just let's be safe about it.\n\nRegards,\nPavel Borisov\n\n", "msg_date": "Fri, 17 May 2024 15:09:04 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "> On May 17, 2024, at 3:11 AM, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> \n> On Mon, May 13, 2024 at 4:42 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>> On Mon, May 13, 2024 at 12:23 AM Alexander Korotkov\n>> <aekorotkov@gmail.com> wrote:\n>>> On Sat, May 11, 2024 at 4:13 AM Mark Dilger\n>>> <mark.dilger@enterprisedb.com> wrote:\n>>>>> On May 10, 2024, at 12:05 PM, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>>>>> The only bt_target_page_check() caller is\n>>>>> bt_check_level_from_leftmost(), which overrides state->target in the\n>>>>> next iteration anyway.  I think the patch is just refactoring to\n>>>>> eliminate the confusion pointer by Peter Geoghegan upthread.\n>>>>\n>>>> I find your argument unconvincing.\n>>>>\n>>>> After bt_target_page_check() returns at line 919, and before bt_check_level_from_leftmost() overrides state->target in the next iteration, bt_check_level_from_leftmost() conditionally fetches an item from the page referenced by state->target.  See line 963.\n>>>>\n>>>> I'm left with four possibilities:\n>>>>\n>>>>\n>>>> 1)  bt_target_page_check() never gets to the code that uses \"rightpage\" rather than \"state->target\" in the same iteration where bt_check_level_from_leftmost() conditionally fetches an item from state->target, so the change you're making doesn't matter.\n>>>>\n>>>> 2)  The code prior to v2-0003 was wrong, having changed state->target in an inappropriate way, causing the wrong thing to happen at what is now line 963. 
The patch fixes the bug, because state->target no longer gets overwritten where you are now using \"rightpage\" for the value.\n>>>> \n>>>> 3) The code used to work, having set up state->target correctly in the place where you are now using \"rightpage\", but v2-0003 has broken that.\n>>>> \n>>>> 4) It's been broken all along and your patch just changes from wrong to wrong.\n>>>> \n>>>> \n>>>> If you believe (1) is true, then I'm complaining that you are relying far to much on action at a distance, and that you are not documenting it. Even with documentation of this interrelationship, I'd be unhappy with how brittle the code is. I cannot easily discern that the two don't ever happen in the same iteration, and I'm not at all convinced one way or the other. I tried to set up some Asserts about that, but none of the test cases actually reach the new code, so adding Asserts doesn't help to investigate the question.\n>>>> \n>>>> If (2) is true, then I'm complaining that the commit message doesn't mention the fact that this is a bug fix. Bug fixes should be clearly documented as such, otherwise future work might assume the commit can be reverted with only stylistic consequences.\n>>>> \n>>>> If (3) is true, then I'm complaining that the patch is flat busted.\n>>>> \n>>>> If (4) is true, then maybe we should revert the entire feature, or have a discussion of mitigation efforts that are needed.\n>>>> \n>>>> Regardless of which of 1..4 you pick, I think it could all do with more regression test coverage.\n>>>> \n>>>> \n>>>> For reference, I said something similar earlier today in another email to this thread:\n>>>> \n>>>> This patch introduces a change that stores a new page into variable \"rightpage\" rather than overwriting \"state->target\", which the old implementation most certainly did. 
That means that after returning from bt_target_page_check() into the calling function bt_check_level_from_leftmost() the value in state->target is not what it would have been prior to this patch. Now, that'd be irrelevant if nobody goes on to consult that value, but just 44 lines further down in bt_check_level_from_leftmost() state->target is clearly used. So the behavior at that point is changing between the old and new versions of the code, and I think I'm within reason to ask if it was wrong before the patch, wrong after the patch, or something else? Is this a bug being introduced, being fixed, or ... ?\n>>> \n>>> Thank you for your analysis. I'm inclined to believe in 2, but not\n>>> yet completely sure. It's really pity that our tests don't cover\n>>> this. I'm investigating this area.\n>> \n>> It seems that I got to the bottom of this. Changing\n>> BtreeCheckState.target for a cross-page unique constraint check is\n>> wrong, but that happens only for leaf pages. After that\n>> BtreeCheckState.target is only used for setting the low key. The low\n>> key is only used for non-leaf pages. So, that didn't lead to any\n>> visible bug. I've revised the commit message to reflect this.\n>> \n>> So, the picture for the patches is the following now.\n>> 0001 – optimization, but rather simple and giving huge effect\n>> 0002 – refactoring\n>> 0003 – fix for the bug\n>> 0004 – better error reporting\n> \n> I think the thread contains enough motivation on why 0002, 0003 and\n> 0004 are material for post-FF. They are fixes and refactoring for\n> new-in-v17 feature. I'm going to push them if no objections.\n> \n> Regarding 0001, I'd like to ask Tom and Mark if they find convincing\n> that given that optimization is small, simple and giving huge effect,\n> it could be pushed post-FF? Otherwise, this could wait for v18.\n\nI won't pretend to be part of the Release Management Team. Perhaps Tom wishes to respond.\n\n\n\nI wrote a TAP test to check the uniqueness checker. 
bt_index_check() sometimes fails to detect a corruption. This is true both before and after applying v3-0001. The bt_index_parent_check() seems to always detect the corruption created by the TAP test. Likewise, this is true both before and after applying v3-0001.\n\nThe documentation in https://www.postgresql.org/docs/devel/amcheck.html#AMCHECK-FUNCTIONS is ambiguous:\n\n\"bt_index_check does not verify invariants that span child/parent relationships, but will verify the presence of all heap tuples as index tuples within the index when heapallindexed is true. When checkunique is true bt_index_check will check that no more than one among duplicate entries in unique index is visible. When a routine, lightweight test for corruption is required in a live production environment, using bt_index_check often provides the best trade-off between thoroughness of verification and limiting the impact on application performance and availability.\"\n\nThe second sentence, \"When checkunique is true bt_index_check will check that no more than one among duplicate entries in unique index is visible.\" is not strictly true, as it won't check if the violation spans a page boundary. That's implied by the surrounding sentences, but I'm not sure a reader can be trusted to know which way to interpret how \"checkunique\" works. Clarification is needed.\n\n\n\nThe attached TAP test is not intended for commit. I am only including it here because you might want to use the TAP test as a starting point for creating and testing for new kinds of corruption. Beware the test intentionally includes an infinite loop, which is helpful for a developer examining the code, but not at all appropriate otherwise. It loads all blocks of the index into memory each loop, which could be made more efficient if we wanted this to be part of the core codebase. I just threw it together this morning. 
It's not polished, documented, checked for portability, or otherwise production quality.\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 17 May 2024 10:41:19 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "Hi, Mark!\n\n> The documentation in\n> https://www.postgresql.org/docs/devel/amcheck.html#AMCHECK-FUNCTIONS is\n> ambiguous:\n>\n> \"bt_index_check does not verify invariants that span child/parent\n> relationships, but will verify the presence of all heap tuples as index\n> tuples within the index when heapallindexed is true. When checkunique is\n> true bt_index_check will check that no more than one among duplicate\n> entries in unique index is visible. When a routine, lightweight test for\n> corruption is required in a live production environment, using\n> bt_index_check often provides the best trade-off between thoroughness of\n> verification and limiting the impact on application performance and\n> availability.\"\n>\n> The second sentence, \"When checkunique is true bt_index_check will check\n> that no more than one among duplicate entries in unique index is visible.\"\n> is not strictly true, as it won't check if the violation spans a page\n> boundary.\n>\nAmcheck with checkunique option does check uniqueness violation between\npages. But it doesn't warranty detection of cross page uniqueness\nviolations in extremely rare cases when the first equal index entry on the\nnext page corresponds to tuple that is not visible (e.g. dead). 
In this, I\nfollowed the Peter's notion [1] that checking across a number of dead equal\nentries that could theoretically span even across many pages is an\nunneeded code complication and amcheck is not a tool that provides any\nwarranty when checking an index.\n\nI'm not against docs modification in any way that clarifies its exact usage\nand limitations.\n\nKind regards,\nPavel Borisov\n\n[1]\nhttps://www.postgresql.org/message-id/CAH2-Wz%3DttG__BTZ-r5ccopBRb5evjg%3DzsF_o_3C5h4zRBA_LjQ%40mail.gmail.com\n\nHi, Mark!\nThe documentation in https://www.postgresql.org/docs/devel/amcheck.html#AMCHECK-FUNCTIONS is ambiguous:\n\n\"bt_index_check does not verify invariants that span child/parent relationships, but will verify the presence of all heap tuples as index tuples within the index when heapallindexed is true. When checkunique is true bt_index_check will check that no more than one among duplicate entries in unique index is visible. When a routine, lightweight test for corruption is required in a live production environment, using bt_index_check often provides the best trade-off between thoroughness of verification and limiting the impact on application performance and availability.\"\n\nThe second sentence, \"When checkunique is true bt_index_check will check that no more than one among duplicate entries in unique index is visible.\" is not strictly true, as it won't check if the violation spans a page boundary.  Amcheck with checkunique option does check uniqueness violation between pages. But it doesn't warranty detection of cross page uniqueness violations in extremely rare cases when the first equal index entry on the next page corresponds to tuple that is not visible (e.g. dead). 
In this, I\nfollowed the Peter's notion [1] that checking across a number of dead equal\nentries that could theoretically span even across many pages is an\nunneeded code complication and amcheck is not a tool that provides any\nwarranty when checking an index.\n\nI'm not against docs modification in any way that clarifies its exact usage\nand limitations.\n\nKind regards,\nPavel Borisov\n\n[1]\nhttps://www.postgresql.org/message-id/CAH2-Wz%3DttG__BTZ-r5ccopBRb5evjg%3DzsF_o_3C5h4zRBA_LjQ%40mail.gmail.com\n\n", "msg_date": "Fri, 17 May 2024 22:51:38 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "\n\n> On May 17, 2024, at 11:51 AM, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> \n> Amcheck with checkunique option does check uniqueness violation between pages. But it doesn't warranty detection of cross page uniqueness violations in extremely rare cases when the first equal index entry on the next page corresponds to tuple that is not visible (e.g. dead). In this, I followed the Peter's notion [1] that checking across a number of dead equal entries that could theoretically span even across many pages is an unneeded code complication and amcheck is not a tool that provides any warranty when checking an index.\n\nThis confuses me a bit. The regression test creates a table and index but never performs any DELETE nor any UPDATE operations, so none of the index entries should be dead. If I am understanding you correct, I'd be forced to conclude that the uniqueness checking code is broken. Can you take a look?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 17 May 2024 12:10:10 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." 
But it doesn't warranty detection of cross page uniqueness\n> violations in extremely rare cases when the first equal index entry on the\n> next page corresponds to tuple that is not visible (e.g. dead). In this, I\n> followed the Peter's notion [1] that checking across a number of dead equal\n> entries that could theoretically span even across many pages is an unneeded\n> code complication and amcheck is not a tool that provides any warranty when\n> checking an index.\n>\n> This confuses me a bit. The regression test creates a table and index but\n> never performs any DELETE nor any UPDATE operations, so none of the index\n> entries should be dead. If I am understanding you correct, I'd be forced\n> to conclude that the uniqueness checking code is broken. Can you take a\n> look?\n>\nAt the first glance it's not clear to me:\n- why your test creates cross-page unique constraint violations?\n- how do you know they are not detected?", "msg_date": "Fri, 17 May 2024 23:42:42 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "On Fri, May 17, 2024 at 3:42 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> On further review, the test was not anticipating the error message \"high key invariant violated for index\". That wasn't seen in calls to bt_index_parent_check(), but appears as one of the errors from bt_index_check(). I am rerunning the test now....\n\nMany different parts of the B-Tree code will fight against allowing\nduplicates of the same value to span multiple leaf pages -- this is\nespecially true for unique indexes. For example, nbtsplitloc.c has a\nvariety of strategies that will prevent choosing a split point that\nnecessitates including a distinguishing heap TID in the new high key.\nIn other words, nbtsplitloc.c is very aggressive about picking a split\npoint between (rather than within) groups of duplicates.\n\nOf course it's still *possible* for a unique index to have multiple\nleaf pages containing the same individual value. The regression tests\ndo have coverage for certain relevant code paths (e.g., there is\ncoverage for code paths only hit when _bt_check_unique has to go to\nthe page to the right). This is only the case because I went out of my\nway to make sure of it, by adding tests that allow a huge number of\nversion duplicates to accumulate within a unique index. (The \"move\nright\" _bt_check_unique branches had zero test coverage for a year or\ntwo.)\n\nJust how important it is that amcheck covers cases where the version\nduplicates span multiple leaf pages is of course debatable -- it's\nalways better to be more thorough, when practical. 
But it's certainly\nsomething that needs to be assessed based on the merits.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 17 May 2024 16:00:55 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "\n\n> On May 17, 2024, at 12:42 PM, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> \n> At the first glance it's not clear to me: \n> - why your test creates cross-page unique constraint violations?\n\nTo see if they are detected.\n\n> - how do you know they are not detected?\n\nIt appears that they are detected. At least, rerunning the test after adjusting the expected output, I no longer see problems.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 17 May 2024 13:08:10 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "\n\n> On May 17, 2024, at 1:00 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> Many different parts of the B-Tree code will fight against allowing\n> duplicates of the same value to span multiple leaf pages -- this is\n> especially true for unique indexes. \n\nThe quick-and-dirty TAP test I wrote this morning is intended to introduce duplicates across page boundaries, not to test for ones that got there by normal database activity. In other words, the TAP test forcibly corrupts the index by changing a value on one side of a boundary to be equal to the value on the other side of the boundary. 
Prior to the corrupting action the values were all unique.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Fri, 17 May 2024 13:10:46 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "On Fri, May 17, 2024 at 4:10 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> The quick-and-dirty TAP test I wrote this morning is intended to introduce duplicates across page boundaries, not to test for ones that got there by normal database activity. In other words, the TAP test forcibly corrupts the index by changing a value on one side of a boundary to be equal to the value on the other side of the boundary. Prior to the corrupting action the values were all unique.\n\nI understood that. I was just pointing out that an index that looks\neven somewhat like that is already quite unnatural.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 17 May 2024 16:13:20 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "Hi, Mark!\n\n> > At the first glance it's not clear to me:\n> > - why your test creates cross-page unique constraint violations?\n>\n> To see if they are detected.\n>\n> > - how do you know they are not detected?\n>\n> It appears that they are detected. At least, rerunning the test after\n> adjusting the expected output, I no longer see problems.\n>\n\nI understand your point. It was unclear how it modified the index so that\nonly unique constraint check between pages should have failed with other\nchecks passed.\n\nAnyway, thanks for your testing and efforts! I'm happy that the test now\npasses and confirms that amcheck feature works as intended.\n\nKind regards,\nPavel Borisov\n\nHi, Mark! 
", "msg_date": "Sat, 18 May 2024 01:08:03 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "On Fri, May 17, 2024 at 1:11 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> I think the thread contains enough motivation on why 0002, 0003 and\n> 0004 are material for post-FF. They are fixes and refactoring for\n> new-in-v17 feature. I'm going to push them if no objections.\n>\n> Regarding 0001, I'd like to ask Tom and Mark if they find convincing\n> that given that optimization is small, simple and giving huge effect,\n> it could be pushed post-FF? Otherwise, this could wait for v18.\n\nThe revised version of 0001 unique checking optimization is attached.\nI'm going to push this to v18 if no objections.\n\n------\nRegards,\nAlexander Korotkov\nSupabase
}, { "msg_contents": "On Fri, Jul 26, 2024 at 8:10 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> The revised version of 0001 unique checking optimization is attached.\n> I'm going to push this to v18 if no objections.\n\nI have no reason to specifically object to pushing this into 18, but I\nwould like to point out that you're posting here about this but failed\nto reply to the \"64-bit pg_notify page numbers truncated to 32-bit\",\nan open item that was assigned to you but which, since you didn't\nrespond, was eventually fixed by commits from Michael Paquier.\n\nI know it's easy to lose track of the open items list and I sometimes\nforget to check it myself, but it's rather important to stay on top of\nany open items that get assigned to you.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Jul 2024 10:38:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." }, { "msg_contents": "On Fri, Jul 26, 2024 at 5:38 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Jul 26, 2024 at 8:10 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > The revised version of 0001 unique checking optimization is attached.\n> > I'm going to push this to v18 if no objections.\n>\n> I have no reason to specifically object to pushing this into 18, but I\n> would like to point out that you're posting here about this but failed\n> to reply to the \"64-bit pg_notify page numbers truncated to 32-bit\",\n> an open item that was assigned to you but which, since you didn't\n> respond, was eventually fixed by commits from Michael Paquier.\n>\n> I know it's easy to lose track of the open items list and I sometimes\n> forget to check it myself, but it's rather important to stay on top of\n> any open items that get assigned to you.\n\nYes, it's a pity I miss this open item on me. 
Besides putting ashes\non my head, I think I could pay more attention on other open items.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Fri, 26 Jul 2024 23:53:38 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve amcheck to also check UNIQUE constraint in btree\n index." } ]
[ { "msg_contents": "Hello,\r\n I recently looked at what it would take to make a running autovacuum pick-up a change to either cost_delay or cost_limit. Users frequently will have a conservative value set, and then wish to change it when autovacuum initiates a freeze on a relation. Most users end up finding out they are in ‘to prevent wraparound’ after it has happened, this means that if they want the vacuum to take advantage of more I/O, they need to stop and then restart the currently running vacuum (after reloading the GUCs).\r\n\r\n Initially, my goal was to determine feasibility for making this dynamic. I added debug code to vacuum.c:vacuum_delay_point(void) and found that changes to cost_delay and cost_limit are already processed by a running vacuum. There was a bug preventing the cost_delay or cost_limit from being configured to allow higher throughput however.\r\n\r\nI believe this is a bug because currently, autovacuum will dynamically detect and increase the cost_limit or cost_delay, but it can never decrease those values beyond their setting when the vacuum began. The current behavior is for vacuum to limit the maximum throughput of currently running vacuum processes to the cost_limit that was set when the vacuum process began.\r\n\r\nI changed this (see attached) to allow the cost_limit to be re-calculated up to the maximum allowable (currently 10,000). 
This has the effect of allowing users to reload a configuration change and an in-progress vacuum can be ‘sped-up’ by setting either the cost_limit or cost_delay.\r\n\r\nThe problematic piece is:\r\n\r\ndiff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c\r\nindex c6ec657a93..d3c6b0d805 100644\r\n--- a/src/backend/postmaster/autovacuum.c\r\n+++ b/src/backend/postmaster/autovacuum.c\r\n@@ -1834,7 +1834,7 @@ autovac_balance_cost(void)\r\n * cost_limit to more than the base value.\r\n */\r\n worker->wi_cost_limit = Max(Min(limit,\r\n- worker->wi_cost_limit_base),\r\n+ MAXVACUUMCOSTLIMIT),\r\n 1);\r\n }\r\n\r\nWe limit the worker to the max cost_limit that was set at the beginning of the vacuum. I introduced the MAXVACUUMCOSTLIMIT constant (currently defined to 10000, which is the currently max limit already defined) in miscadmin.h so that vacuum will now be able to adjust the cost_limit up to 10000 as the upper limit in a currently running vacuum.\r\n\r\nThe tests that I’ve run show that the performance of an existing vacuum can be increased commensurate with the parameter change. Interestingly, autovac_balance_cost(void) is only updating the cost_limit, even if the cost_delay is modified. 
This is done correctly, it was just a surprise to see the behavior.\r\n\r\n\r\n2021-02-01 13:36:52.346 EST [37891] DEBUG: VACUUM Sleep: Delay: 20.000000, CostBalance: 207, CostLimit: 200, msec: 20.700000\r\n2021-02-01 13:36:52.346 EST [37891] CONTEXT: while scanning block 1824 of relation \"public.blah\"\r\n2021-02-01 13:36:52.362 EST [36460] LOG: received SIGHUP, reloading configuration files\r\n\r\n2021-02-01 13:36:52.364 EST [36460] LOG: parameter \"autovacuum_vacuum_cost_delay\" changed to \"2\"\r\n\\\r\n2021-02-01 13:36:52.365 EST [36463] DEBUG: checkpointer updated shared memory configuration values\r\n2021-02-01 13:36:52.366 EST [36466] DEBUG: autovac_balance_cost(pid=37891 db=13207, rel=16384, dobalance=yes cost_limit=2000, cost_limit_base=200, cost_delay=20)\r\n\r\n2021-02-01 13:36:52.366 EST [36467] DEBUG: received inquiry for database 0\r\n2021-02-01 13:36:52.366 EST [36467] DEBUG: writing stats file \"pg_stat_tmp/global.stat\"\r\n2021-02-01 13:36:52.366 EST [36467] DEBUG: writing stats file \"pg_stat_tmp/db_0.stat\"\r\n2021-02-01 13:36:52.388 EST [37891] DEBUG: VACUUM Sleep: Delay: 20.000000, CostBalance: 2001, CostLimit: 2000, msec: 20.010000", "msg_date": "Mon, 8 Feb 2021 14:48:54 +0000", "msg_from": "\"Mead, Scott\" <meads@amazon.com>", "msg_from_op": true, "msg_subject": "[BUG] Autovacuum not dynamically decreasing cost_limit and cost_delay " }, { "msg_contents": "Thanks for the patch, Mead.\n\nFor 'MAXVACUUMCOSTLIMIT\", it would be nice to follow the current GUC \npattern to do define a constant.\n\nFor example, the constant \"MAX_KILOBYTES\" is defined in guc.h, with a \npattern like, \"MAX_\" to make it easy to read.\n\nBest regards,\n\nDavid\n\nOn 2021-02-08 6:48 a.m., Mead, Scott wrote:\n> Hello,\n>    I recently looked at what it would take to make a running \n> autovacuum pick-up a change to either cost_delay or cost_limit.  
Users \n> frequently will have a conservative value set, and then wish to change \n> it when autovacuum initiates a freeze on a relation.  Most users end \n> up finding out they are in ‘to prevent wraparound’ after it has \n> happened, this means that if they want the vacuum to take advantage of \n> more I/O, they need to stop and then restart the currently running \n> vacuum (after reloading the GUCs).\n>   Initially, my goal was to determine feasibility for making this \n> dynamic.  I added debug code to vacuum.c:vacuum_delay_point(void) and \n> found that changes to cost_delay and cost_limit are already processed \n> by a running vacuum.  There was a bug preventing the cost_delay or \n> cost_limit from being configured to allow higher throughput however.\n> I believe this is a bug because currently, autovacuum will dynamically \n> detect and /increase/ the cost_limit or cost_delay, but it can never \n> decrease those values beyond their setting when the vacuum began.  The \n> current behavior is for vacuum to limit the maximum throughput of \n> currently running vacuum processes to the cost_limit that was set when \n> the vacuum process began.\n> I changed this (see attached) to allow the cost_limit to be \n> re-calculated up to the maximum allowable (currently 10,000).  
This \n> has the effect of allowing users to reload a configuration change and \n> an in-progress vacuum can be ‘sped-up’ by setting either the \n> cost_limit or cost_delay.\n> The problematic piece is:\n> diff --git a/src/backend/postmaster/autovacuum.c \n> b/src/backend/postmaster/autovacuum.c\n> index c6ec657a93..d3c6b0d805 100644\n> --- a/src/backend/postmaster/autovacuum.c\n> +++ b/src/backend/postmaster/autovacuum.c\n> @@ -1834,7 +1834,7 @@ autovac_balance_cost(void)\n> * cost_limit to more than the base value.\n> */\n> worker->wi_cost_limit = *Max(Min(limit,*\n> *- worker->wi_cost_limit_base*),\n> +                                 MAXVACUUMCOSTLIMIT),\n> 1);\n> }\n> We limit the worker to the max cost_limit that was set at the \n> beginning of the vacuum.  I introduced the MAXVACUUMCOSTLIMIT constant \n> (currently defined to 10000, which is the currently max limit already \n> defined) in miscadmin.h so that vacuum will now be able to adjust the \n> cost_limit up to 10000 as the upper limit in a currently running vacuum.\n>\n> The tests that I’ve run show that the performance of an existing \n> vacuum can be increased commensurate with the parameter change. \n>  Interestingly, /autovac_balance_cost(void) /is only updating the \n> cost_limit, even if the cost_delay is modified.  
This is done \n> correctly, it was just a surprise to see the behavior.\n>\n>\n> 2021-02-01 13:36:52.346 EST [37891] DEBUG:  VACUUM Sleep: Delay: \n> 20.000000, CostBalance: 207, CostLimit: *200*, msec: 20.700000\n> 2021-02-01 13:36:52.346 EST [37891] CONTEXT:  while scanning block \n> 1824 of relation \"public.blah\"\n> 2021-02-01 13:36:52.362 EST [36460] LOG:  received SIGHUP, reloading \n> configuration files\n> *\n> *\n> *2021-02-01 13:36:52.364 EST [36460] LOG:  parameter \n> \"autovacuum_vacuum_cost_delay\" changed to \"2\"*\n> \\\n> 2021-02-01 13:36:52.365 EST [36463] DEBUG:  checkpointer updated \n> shared memory configuration values\n> 2021-02-01 13:36:52.366 EST [36466] DEBUG: \n>  autovac_balance_cost(pid=37891 db=13207, rel=16384, dobalance=yes \n> cost_limit=2000, cost_limit_base=200, cost_delay=20)\n>\n> 2021-02-01 13:36:52.366 EST [36467] DEBUG:  received inquiry for \n> database 0\n> 2021-02-01 13:36:52.366 EST [36467] DEBUG:  writing stats file \n> \"pg_stat_tmp/global.stat\"\n> 2021-02-01 13:36:52.366 EST [36467] DEBUG:  writing stats file \n> \"pg_stat_tmp/db_0.stat\"\n> 2021-02-01 13:36:52.388 EST [37891] DEBUG:  VACUUM Sleep: Delay: \n> 20.000000, CostBalance: 2001, CostLimit: 2000, msec: 20.010000\n>\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca", "msg_date": "Fri, 12 Feb 2021 12:03:56 -0800", "msg_from": "David Zhang <david.zhang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: [BUG] Autovacuum not dynamically decreasing cost_limit and\n cost_delay" }, { "msg_contents": "On Mon, Feb 8, 2021 at 11:49 PM Mead, Scott <meads@amazon.com> wrote:\n>\n> Hello,\n> I recently looked at what it would take to make a running autovacuum pick-up a change to either cost_delay or cost_limit. Users frequently will have a conservative value set, and then wish to change it when autovacuum initiates a freeze on a relation. 
Most users end up finding out they are in ‘to prevent wraparound’ after it has happened, this means that if they want the vacuum to take advantage of more I/O, they need to stop and then restart the currently running vacuum (after reloading the GUCs).\n>\n> Initially, my goal was to determine feasibility for making this dynamic. I added debug code to vacuum.c:vacuum_delay_point(void) and found that changes to cost_delay and cost_limit are already processed by a running vacuum. There was a bug preventing the cost_delay or cost_limit from being configured to allow higher throughput however.\n>\n> I believe this is a bug because currently, autovacuum will dynamically detect and increase the cost_limit or cost_delay, but it can never decrease those values beyond their setting when the vacuum began. The current behavior is for vacuum to limit the maximum throughput of currently running vacuum processes to the cost_limit that was set when the vacuum process began.\n\nThanks for your report.\n\nI've not looked at the patch yet but I agree that the calculation for\nautovacuum cost delay seems not to work fine if vacuum-delay-related\nparameters (e.g., autovacuum_vacuum_cost_delay etc) are changed during\nvacuuming a table to speed up running autovacuums. Here is my\nanalysis:\n\nSuppose we have the following parameters and 3 autovacuum workers are\nrunning on different tables:\n\nautovacuum_vacuum_cost_delay = 100\nautovacuum_vacuum_cost_limit = 100\n\nVacuum cost-based delay parameters for each workers are follows:\n\nworker->wi_cost_limit_base = 100\nworker->wi_cost_limit = 66\nworker->wi_cost_delay = 100\n\nEach running autovacuum has \"wi_cost_limit = 66\" because the total\nlimit (100) is equally rationed. And another point is that the total\nwi_cost_limit (198 = 66*3) is less than autovacuum_vacuum_cost_limit,\n100. 
Which are fine.\n\nHere let's change autovacuum_vacuum_cost_delay/limit value to speed up\nrunning autovacuums.\n\nCase 1 : increasing autovacuum_vacuum_cost_limit to 1000.\n\nAfter reloading the configuration file, vacuum cost-based delay\nparameters for each worker become as follows:\n\nworker->wi_cost_limit_base = 100\nworker->wi_cost_limit = 100\nworker->wi_cost_delay = 100\n\nIf we rationed autovacuum_vacuum_cost_limit, 1000, to 3 workers, it\nwould be 333. But since we cap it by wi_cost_limit_base, the\nwi_cost_limit is 100. I think this is what Mead reported here.\n\nCase 2 : decreasing autovacuum_vacuum_cost_delay to 10.\n\nAfter reloading the configuration file, vacuum cost-based delay\nparameters for each workers become as follows:\n\nworker->wi_cost_limit_base = 100\nworker->wi_cost_limit = 100\nworker->wi_cost_delay = 100\n\nActually, the result is the same as case 1. But In this case, the\ntotal cost among the three workers is 300, which is greater than\nautovacuum_vacuum_cost_limit, 100. This behavior violates what the\ndocumentation explains in the description of\nautovacuum_vacuum_cost_limit:\n\n---\nNote that the value is distributed proportionally among the running\nautovacuum workers, if there is more than one, so that the sum of the\nlimits for each worker does not exceed the value of this variable.\n---\n\nIt seems to me that those problems come from the fact that we don't\nchange both wi_cost_limit_base and wi_cost_delay during auto-vacuuming\na table in spite of using autovacuum_vac_cost_limit/delay to calculate\ncost_avail. Such a wrong calculation happens until all running\nautovacuum workers finish the current vacuums. 
When a worker starts to\nprocess a new table, it resets both wi_cost_limit_base and\nwi_cost_delay.\n\nLooking at autovac_balance_cost(), it considers worker's\nwi_cost_limit_base to calculate the total base cost limit of\nparticipating active workers as follows:\n\ncost_total +=\n (double) worker->wi_cost_limit_base / worker->wi_cost_delay;\n\nBut what is the point of calculating it while assuming each worker\nhaving a different cost limit? Since workers vacuuming on a table\nwhose cost parameters are set individually doesn't participate in this\ncalculation (by commit 1021bd6a8 in 2014), having at_dobalance true, I\nwonder if we can just assume all workers have the same cost_limit and\ncost_delay except for workers setting at_dobalance true. If we can do\nthat, I guess we no longer need wi_cost_limit_base.\n\nAlso, we don't change wi_cost_delay during vacuuming a table, which\nseems wrong to me. autovac_balance_cost() can change workers'\nwi_cost_delay, eventually applying to VacuumCostDelay.\n\nWhat do you think?\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 2 Mar 2021 10:43:08 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Autovacuum not dynamically decreasing cost_limit and\n cost_delay" }, { "msg_contents": "\r\n\r\n> On Mar 1, 2021, at 8:43 PM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\r\n> \r\n> \r\n> \r\n> On Mon, Feb 8, 2021 at 11:49 PM Mead, Scott <meads@amazon.com> wrote:\r\n>> \r\n>> Hello,\r\n>> I recently looked at what it would take to make a running autovacuum pick-up a change to either cost_delay or cost_limit. Users frequently will have a conservative value set, and then wish to change it when autovacuum initiates a freeze on a relation. 
Most users end up finding out they are in ‘to prevent wraparound’ after it has happened, this means that if they want the vacuum to take advantage of more I/O, they need to stop and then restart the currently running vacuum (after reloading the GUCs).\r\n>> \r\n>> Initially, my goal was to determine feasibility for making this dynamic. I added debug code to vacuum.c:vacuum_delay_point(void) and found that changes to cost_delay and cost_limit are already processed by a running vacuum. There was a bug preventing the cost_delay or cost_limit from being configured to allow higher throughput however.\r\n>> \r\n>> I believe this is a bug because currently, autovacuum will dynamically detect and increase the cost_limit or cost_delay, but it can never decrease those values beyond their setting when the vacuum began. The current behavior is for vacuum to limit the maximum throughput of currently running vacuum processes to the cost_limit that was set when the vacuum process began.\r\n>\r\n> Thanks for your report.\r\n>\r\n> I've not looked at the patch yet but I agree that the calculation for\r\n> autovacuum cost delay seems not to work fine if vacuum-delay-related\r\n> parameters (e.g., autovacuum_vacuum_cost_delay etc) are changed during\r\n> vacuuming a table to speed up running autovacuums. Here is my\r\n> analysis:\r\n\r\n\r\nI appreciate your in-depth analysis and will comment in-line. That said, I still think it’s important that the attached patch is applied.
As it is today, a simple few lines of code prevent users from being able to increase the throughput on vacuums that are running without having to cancel them first.\r\n\r\nThe patch that I’ve provided allows users to decrease their vacuum_cost_delay and get an immediate boost in performance to their running vacuum jobs.\r\n\r\n\r\n> \r\n> Suppose we have the following parameters and 3 autovacuum workers are\r\n> running on different tables:\r\n> \r\n> autovacuum_vacuum_cost_delay = 100\r\n> autovacuum_vacuum_cost_limit = 100\r\n> \r\n> Vacuum cost-based delay parameters for each workers are follows:\r\n> \r\n> worker->wi_cost_limit_base = 100\r\n> worker->wi_cost_limit = 66\r\n> worker->wi_cost_delay = 100\r\n> \r\n> Each running autovacuum has \"wi_cost_limit = 66\" because the total\r\n> limit (100) is equally rationed. And another point is that the total\r\n> wi_cost_limit (198 = 66*3) is less than autovacuum_vacuum_cost_limit,\r\n> 100. Which are fine.\r\n> \r\n> Here let's change autovacuum_vacuum_cost_delay/limit value to speed up\r\n> running autovacuums.\r\n> \r\n> Case 1 : increasing autovacuum_vacuum_cost_limit to 1000.\r\n> \r\n> After reloading the configuration file, vacuum cost-based delay\r\n> parameters for each worker become as follows:\r\n> \r\n> worker->wi_cost_limit_base = 100\r\n> worker->wi_cost_limit = 100\r\n> worker->wi_cost_delay = 100\r\n> \r\n> If we rationed autovacuum_vacuum_cost_limit, 1000, to 3 workers, it\r\n> would be 333. But since we cap it by wi_cost_limit_base, the\r\n> wi_cost_limit is 100. I think this is what Mead reported here.\r\n\r\n\r\nYes, this is exactly correct. The cost_limit is capped at the cost_limit that was set during the start of a running vacuum. 
My patch changes this cap to be the max allowed cost_limit (10,000).\r\n\r\n\r\n\r\n> \r\n> Case 2 : decreasing autovacuum_vacuum_cost_delay to 10.\r\n> \r\n> After reloading the configuration file, vacuum cost-based delay\r\n> parameters for each workers become as follows:\r\n> \r\n> worker->wi_cost_limit_base = 100\r\n> worker->wi_cost_limit = 100\r\n> worker->wi_cost_delay = 100\r\n> \r\n> Actually, the result is the same as case 1. But In this case, the\r\n> total cost among the three workers is 300, which is greater than\r\n> autovacuum_vacuum_cost_limit, 100. This behavior violates what the\r\n> documentation explains in the description of\r\n> autovacuum_vacuum_cost_limit:\r\n> \r\n> ---\r\n> Note that the value is distributed proportionally among the running\r\n> autovacuum workers, if there is more than one, so that the sum of the\r\n> limits for each worker does not exceed the value of this variable.\r\n> ---\r\n> \r\n> It seems to me that those problems come from the fact that we don't\r\n> change both wi_cost_limit_base and wi_cost_delay during auto-vacuuming\r\n> a table in spite of using autovacuum_vac_cost_limit/delay to calculate\r\n> cost_avail. Such a wrong calculation happens until all running\r\n> autovacuum workers finish the current vacuums. When a worker starts to\r\n> process a new table, it resets both wi_cost_limit_base and\r\n> wi_cost_delay.\r\n\r\n\r\nExactly. The tests I ran with extra debugging show exactly this behavior.\r\n\r\n> \r\n> Looking at autovac_balance_cost(), it considers worker's\r\n> wi_cost_limit_base to calculate the total base cost limit of\r\n> participating active workers as follows:\r\n> \r\n> cost_total +=\r\n> (double) worker->wi_cost_limit_base / worker->wi_cost_delay;\r\n> \r\n> But what is the point of calculating it while assuming each worker\r\n> having a different cost limit? 
Since workers vacuuming on a table\r\n> whose cost parameters are set individually doesn't participate in this\r\n> calculation (by commit 1021bd6a8 in 2014), having at_dobalance true, I\r\n> wonder if we can just assume all workers have the same cost_limit and\r\n> cost_delay except for workers setting at_dobalance true. If we can do\r\n> that, I guess we no longer need wi_cost_limit_base.\r\n\r\nThis is where I wasn’t sure the exact reason for maintaining the wi_cost_limit_base. It wasn’t immediately clear if there was a reason other than just tracking what it was at the start of the vacuum.\r\n\r\n\r\n> \r\n> Also, we don't change wi_cost_delay during vacuuming a table, which\r\n> seems wrong to me. autovac_balance_cost() can change workers'\r\n> wi_cost_delay, eventually applying to VacuumCostDelay.\r\n> \r\n> What do you think?\r\n\r\nYeah, I think updates to any of these throttles dynamically make sense, especially instead of changing other parameters when a user sets a different one (delay vs. limit).\r\n\r\n\r\n\r\n\r\n\r\n> \r\n> Regards,\r\n> \r\n> --\r\n> Masahiko Sawada\r\n> EDB: https://www.enterprisedb.com/\r\n> \r\n> \r\n\r\n", "msg_date": "Wed, 14 Apr 2021 14:17:45 +0000", "msg_from": "\"Mead, Scott\" <meads@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] Autovacuum not dynamically decreasing cost_limit and\n cost_delay" }, { "msg_contents": "On Wed, Apr 14, 2021 at 11:17 PM Mead, Scott <meads@amazon.com> wrote:\n>\n>\n>\n> > On Mar 1, 2021, at 8:43 PM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n> >\n> >\n> >\n> > On Mon, Feb 8, 2021 at 11:49 PM Mead, Scott <meads@amazon.com> wrote:\n> >>\n> >> Hello,\n> >> I recently looked at what it would take to make a running autovacuum pick-up a change to either cost_delay or cost_limit. 
Users frequently will have a conservative value set, and then wish to change it when autovacuum initiates a freeze on a relation. Most users end up finding out they are in ‘to prevent wraparound’ after it has happened, this means that if they want the vacuum to take advantage of more I/O, they need to stop and then restart the currently running vacuum (after reloading the GUCs).\n> >>\n> >> Initially, my goal was to determine feasibility for making this dynamic. I added debug code to vacuum.c:vacuum_delay_point(void) and found that changes to cost_delay and cost_limit are already processed by a running vacuum. There was a bug preventing the cost_delay or cost_limit from being configured to allow higher throughput however.\n> >>\n> >> I believe this is a bug because currently, autovacuum will dynamically detect and increase the cost_limit or cost_delay, but it can never decrease those values beyond their setting when the vacuum began. The current behavior is for vacuum to limit the maximum throughput of currently running vacuum processes to the cost_limit that was set when the vacuum process began.\n> >\n> > Thanks for your report.\n> >\n> > I've not looked at the patch yet but I agree that the calculation for\n> > autovacuum cost delay seems not to work fine if vacuum-delay-related\n> > parameters (e.g., autovacuum_vacuum_cost_delay etc) are changed during\n> > vacuuming a table to speed up running autovacuums. Here is my\n> > analysis:\n>\n>\n> I appreciate your in-depth analysis and will comment in-line. That said, I still think it’s important that the attached path is applied. 
As it is today, a simple few lines of code prevent users from being able to increase the throughput on vacuums that are running without having to cancel them first.\n>\n> The patch that I’ve provided allows users to decrease their vacuum_cost_delay and get an immediate boost in performance to their running vacuum jobs.\n>\n>\n> >\n> > Suppose we have the following parameters and 3 autovacuum workers are\n> > running on different tables:\n> >\n> > autovacuum_vacuum_cost_delay = 100\n> > autovacuum_vacuum_cost_limit = 100\n> >\n> > Vacuum cost-based delay parameters for each workers are follows:\n> >\n> > worker->wi_cost_limit_base = 100\n> > worker->wi_cost_limit = 66\n> > worker->wi_cost_delay = 100\n\nSorry, worker->wi_cost_limit should be 33.\n\n> >\n> > Each running autovacuum has \"wi_cost_limit = 66\" because the total\n> > limit (100) is equally rationed. And another point is that the total\n> > wi_cost_limit (198 = 66*3) is less than autovacuum_vacuum_cost_limit,\n> > 100. Which are fine.\n\nSo the total wi_cost_limit, 99, is less than autovacuum_vacuum_cost_limit, 100.\n\n> >\n> > Here let's change autovacuum_vacuum_cost_delay/limit value to speed up\n> > running autovacuums.\n> >\n> > Case 1 : increasing autovacuum_vacuum_cost_limit to 1000.\n> >\n> > After reloading the configuration file, vacuum cost-based delay\n> > parameters for each worker become as follows:\n> >\n> > worker->wi_cost_limit_base = 100\n> > worker->wi_cost_limit = 100\n> > worker->wi_cost_delay = 100\n> >\n> > If we rationed autovacuum_vacuum_cost_limit, 1000, to 3 workers, it\n> > would be 333. But since we cap it by wi_cost_limit_base, the\n> > wi_cost_limit is 100. I think this is what Mead reported here.\n>\n>\n> Yes, this is exactly correct. The cost_limit is capped at the cost_limit that was set during the start of a running vacuum. 
My patch changes this cap to be the max allowed cost_limit (10,000).\n\nThe comment of worker's limit calculation says:\n\n /*\n * We put a lower bound of 1 on the cost_limit, to avoid division-\n * by-zero in the vacuum code. Also, in case of roundoff trouble\n * in these calculations, let's be sure we don't ever set\n * cost_limit to more than the base value.\n */\n worker->wi_cost_limit = Max(Min(limit,\n worker->wi_cost_limit_base),\n 1);\n\nIf we use the max cost_limit as the upper bound here, the worker's\nlimit could unnecessarily be higher than the base value in case of\nroundoff trouble? I think that the problem here is rather that we\ndon't update wi_cost_limit_base and wi_cost_delay when rebalancing the\ncost.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 26 May 2021 17:00:22 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Autovacuum not dynamically decreasing cost_limit and\n cost_delay" }, { "msg_contents": "On Wed, May 26, 2021 at 4:01 AM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> On Wed, Apr 14, 2021 at 11:17 PM Mead, Scott <meads@amazon.com> wrote:\n> >\n> >\n> >\n> > > On Mar 1, 2021, at 8:43 PM, Masahiko Sawada <sawada.mshk@gmail.com>\n> wrote:\n> > >\n> > > CAUTION: This email originated from outside of the organization. Do\n> not click links or open attachments unless you can confirm the sender and\n> know the content is safe.\n> > >\n> > >\n> > >\n> > > On Mon, Feb 8, 2021 at 11:49 PM Mead, Scott <meads@amazon.com> wrote:\n> > >>\n> > >> Hello,\n> > >> I recently looked at what it would take to make a running\n> autovacuum pick-up a change to either cost_delay or cost_limit. Users\n> frequently will have a conservative value set, and then wish to change it\n> when autovacuum initiates a freeze on a relation. 
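The clamp in the snippet quoted just above can be restated in executable form to show why a configuration reload never raises a running worker's limit. This is a sketch only: clamp_current mirrors the quoted C expression, while clamp_proposed is a hypothetical model of the posted patch's idea (capping at the GUC's absolute maximum of 10,000), not the patch itself.

```python
# Current behavior: a worker's rationed limit is clamped to the
# wi_cost_limit_base recorded when its vacuum began.
def clamp_current(limit, cost_limit_base):
    return max(min(limit, cost_limit_base), 1)

# Sketch of the proposed change: clamp at the GUC's absolute maximum
# instead, so a raised autovacuum_vacuum_cost_limit can take effect.
def clamp_proposed(limit, max_cost_limit=10000):
    return max(min(limit, max_cost_limit), 1)

# A worker started with base 100; after autovacuum_vacuum_cost_limit is
# raised, its rationed share becomes 333:
print(clamp_current(333, 100))  # 100 -- the increase is ignored
print(clamp_proposed(333))      # 333 -- the increase takes effect
```

The lower bound of 1 is preserved in both variants, since the vacuum code divides by the limit.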
Most users end up\n> finding out they are in ‘to prevent wraparound’ after it has happened, this\n> means that if they want the vacuum to take advantage of more I/O, they need\n> to stop and then restart the currently running vacuum (after reloading the\n> GUCs).\n> > >>\n> > >> Initially, my goal was to determine feasibility for making this\n> dynamic. I added debug code to vacuum.c:vacuum_delay_point(void) and found\n> that changes to cost_delay and cost_limit are already processed by a\n> running vacuum. There was a bug preventing the cost_delay or cost_limit\n> from being configured to allow higher throughput however.\n> > >>\n> > >> I believe this is a bug because currently, autovacuum will\n> dynamically detect and increase the cost_limit or cost_delay, but it can\n> never decrease those values beyond their setting when the vacuum began.\n> The current behavior is for vacuum to limit the maximum throughput of\n> currently running vacuum processes to the cost_limit that was set when the\n> vacuum process began.\n> > >\n> > > Thanks for your report.\n> > >\n> > > I've not looked at the patch yet but I agree that the calculation for\n> > > autovacuum cost delay seems not to work fine if vacuum-delay-related\n> > > parameters (e.g., autovacuum_vacuum_cost_delay etc) are changed during\n> > > vacuuming a table to speed up running autovacuums. Here is my\n> > > analysis:\n> >\n> >\n> > I appreciate your in-depth analysis and will comment in-line. That\n> said, I still think it’s important that the attached path is applied. 
As\n> it is today, a simple few lines of code prevent users from being able to\n> increase the throughput on vacuums that are running without having to\n> cancel them first.\n> >\n> > The patch that I’ve provided allows users to decrease their\n> vacuum_cost_delay and get an immediate boost in performance to their\n> running vacuum jobs.\n> >\n> >\n> > >\n> > > Suppose we have the following parameters and 3 autovacuum workers are\n> > > running on different tables:\n> > >\n> > > autovacuum_vacuum_cost_delay = 100\n> > > autovacuum_vacuum_cost_limit = 100\n> > >\n> > > Vacuum cost-based delay parameters for each workers are follows:\n> > >\n> > > worker->wi_cost_limit_base = 100\n> > > worker->wi_cost_limit = 66\n> > > worker->wi_cost_delay = 100\n>\n> Sorry, worker->wi_cost_limit should be 33.\n>\n> > >\n> > > Each running autovacuum has \"wi_cost_limit = 66\" because the total\n> > > limit (100) is equally rationed. And another point is that the total\n> > > wi_cost_limit (198 = 66*3) is less than autovacuum_vacuum_cost_limit,\n> > > 100. Which are fine.\n>\n> So the total wi_cost_limit, 99, is less than autovacuum_vacuum_cost_limit,\n> 100.\n>\n> > >\n> > > Here let's change autovacuum_vacuum_cost_delay/limit value to speed up\n> > > running autovacuums.\n> > >\n> > > Case 1 : increasing autovacuum_vacuum_cost_limit to 1000.\n> > >\n> > > After reloading the configuration file, vacuum cost-based delay\n> > > parameters for each worker become as follows:\n> > >\n> > > worker->wi_cost_limit_base = 100\n> > > worker->wi_cost_limit = 100\n> > > worker->wi_cost_delay = 100\n> > >\n> > > If we rationed autovacuum_vacuum_cost_limit, 1000, to 3 workers, it\n> > > would be 333. But since we cap it by wi_cost_limit_base, the\n> > > wi_cost_limit is 100. I think this is what Mead reported here.\n> >\n> >\n> > Yes, this is exactly correct. The cost_limit is capped at the\n> cost_limit that was set during the start of a running vacuum. 
My patch\n> changes this cap to be the max allowed cost_limit (10,000).\n>\n> The comment of worker's limit calculation says:\n>\n>         /*\n>          * We put a lower bound of 1 on the cost_limit, to avoid division-\n>          * by-zero in the vacuum code.  Also, in case of roundoff trouble\n>          * in these calculations, let's be sure we don't ever set\n>          * cost_limit to more than the base value.\n>          */\n>         worker->wi_cost_limit = Max(Min(limit,\n>                                         worker->wi_cost_limit_base),\n>                                     1);\n>\n> If we use the max cost_limit as the upper bound here, the worker's\n> limit could unnecessarily be higher than the base value in case of\n> roundoff trouble? I think that the problem here is rather that we\n> don't update wi_cost_limit_base and wi_cost_delay when rebalancing the\n> cost.\n>\n\nCurrently, vacuum always limits you to the cost_limit_base from the time\nthat your vacuum started.  I'm not sure why, I don't believe it's rounding\nrelated because the rest of the rebalancing code works properly.  ISTM that\nsimply allowing the updated cost_limit is a simple solution since\nthe rebalance code will automatically take it into account.\n\n>\n> Regards,\n>\n> --\n> Masahiko Sawada\n> EDB: https://www.enterprisedb.com/\n>\n\n-- \n--\nScott Mead\n*scott@meads.us <scott@meads.us>*\n", "msg_date": "Tue, 26 Oct 2021 11:23:41 -0400", "msg_from": "Scott Mead <scott@meads.us>", "msg_from_op": false, "msg_subject": "Re: [BUG] Autovacuum not dynamically decreasing cost_limit and\n cost_delay" }, { "msg_contents": "Moving to bugs list.\n\nOn Tue, Oct 26, 2021 at 11:23 AM Scott Mead <scott@meads.us> wrote:\n\n>\n>\n> On Wed, May 26, 2021 at 4:01 AM Masahiko Sawada <sawada.mshk@gmail.com>\n> wrote:\n>\n>> On Wed, Apr 14, 2021 at 11:17 PM Mead, Scott <meads@amazon.com> wrote:\n>> >\n>> >\n>> >\n>> > > On Mar 1, 2021, at 8:43 PM, Masahiko Sawada <sawada.mshk@gmail.com>\n>> wrote:\n>> > >\n>> > > CAUTION: This email originated from outside of the organization. Do\n>> not click links or open attachments unless you can confirm the sender and\n>> know the content is safe.\n>> > >\n>> > >\n>> > >\n>> > > On Mon, Feb 8, 2021 at 11:49 PM Mead, Scott <meads@amazon.com> wrote:\n>> > >>\n>> > >> Hello,\n>> > >> I recently looked at what it would take to make a running\n>> autovacuum pick-up a change to either cost_delay or cost_limit. 
There was a bug preventing the cost_delay or cost_limit\n>> from being configured to allow higher throughput however.\n>> > >>\n>> > >> I believe this is a bug because currently, autovacuum will\n>> dynamically detect and increase the cost_limit or cost_delay, but it can\n>> never decrease those values beyond their setting when the vacuum began.\n>> The current behavior is for vacuum to limit the maximum throughput of\n>> currently running vacuum processes to the cost_limit that was set when the\n>> vacuum process began.\n>> > >\n>> > > Thanks for your report.\n>> > >\n>> > > I've not looked at the patch yet but I agree that the calculation for\n>> > > autovacuum cost delay seems not to work fine if vacuum-delay-related\n>> > > parameters (e.g., autovacuum_vacuum_cost_delay etc) are changed during\n>> > > vacuuming a table to speed up running autovacuums. Here is my\n>> > > analysis:\n>> >\n>> >\n>> > I appreciate your in-depth analysis and will comment in-line. That\n>> said, I still think it’s important that the attached path is applied. 
As\n>> it is today, a simple few lines of code prevent users from being able to\n>> increase the throughput on vacuums that are running without having to\n>> cancel them first.\n>> >\n>> > The patch that I’ve provided allows users to decrease their\n>> vacuum_cost_delay and get an immediate boost in performance to their\n>> running vacuum jobs.\n>> >\n>> >\n>> > >\n>> > > Suppose we have the following parameters and 3 autovacuum workers are\n>> > > running on different tables:\n>> > >\n>> > > autovacuum_vacuum_cost_delay = 100\n>> > > autovacuum_vacuum_cost_limit = 100\n>> > >\n>> > > Vacuum cost-based delay parameters for each workers are follows:\n>> > >\n>> > > worker->wi_cost_limit_base = 100\n>> > > worker->wi_cost_limit = 66\n>> > > worker->wi_cost_delay = 100\n>>\n>> Sorry, worker->wi_cost_limit should be 33.\n>>\n>> > >\n>> > > Each running autovacuum has \"wi_cost_limit = 66\" because the total\n>> > > limit (100) is equally rationed. And another point is that the total\n>> > > wi_cost_limit (198 = 66*3) is less than autovacuum_vacuum_cost_limit,\n>> > > 100. Which are fine.\n>>\n>> So the total wi_cost_limit, 99, is less than\n>> autovacuum_vacuum_cost_limit, 100.\n>>\n>> > >\n>> > > Here let's change autovacuum_vacuum_cost_delay/limit value to speed up\n>> > > running autovacuums.\n>> > >\n>> > > Case 1 : increasing autovacuum_vacuum_cost_limit to 1000.\n>> > >\n>> > > After reloading the configuration file, vacuum cost-based delay\n>> > > parameters for each worker become as follows:\n>> > >\n>> > > worker->wi_cost_limit_base = 100\n>> > > worker->wi_cost_limit = 100\n>> > > worker->wi_cost_delay = 100\n>> > >\n>> > > If we rationed autovacuum_vacuum_cost_limit, 1000, to 3 workers, it\n>> > > would be 333. But since we cap it by wi_cost_limit_base, the\n>> > > wi_cost_limit is 100. I think this is what Mead reported here.\n>> >\n>> >\n>> > Yes, this is exactly correct. 
The cost_limit is capped at the\n>> cost_limit that was set during the start of a running vacuum. My patch\n>> changes this cap to be the max allowed cost_limit (10,000).\n>>\n>> The comment of worker's limit calculation says:\n>>\n>> /*\n>> * We put a lower bound of 1 on the cost_limit, to avoid division-\n>> * by-zero in the vacuum code. Also, in case of roundoff trouble\n>> * in these calculations, let's be sure we don't ever set\n>> * cost_limit to more than the base value.\n>> */\n>> worker->wi_cost_limit = Max(Min(limit,\n>> worker->wi_cost_limit_base),\n>> 1);\n>>\n>> If we use the max cost_limit as the upper bound here, the worker's\n>> limit could unnecessarily be higher than the base value in case of\n>> roundoff trouble? I think that the problem here is rather that we\n>> don't update wi_cost_limit_base and wi_cost_delay when rebalancing the\n>> cost.\n>>\n>\n> Currently, vacuum always limits you to the cost_limit_base from the time\n> that your vacuum started. I'm not sure why, I don't believe it's rounding\n> related because the rest of the rebalancing code works properly. ISTM that\n> looking simply allowing the updated cost_limit is a simple solution since\n> the rebalance code will automatically take it into account.\n>\n>\n>\n>\n>\n>>\n>> Regards,\n>>\n>> --\n>> Masahiko Sawada\n>> EDB: https://www.enterprisedb.com/\n>>\n>>\n>>\n>\n> --\n> --\n> Scott Mead\n> *scott@meads.us <scott@meads.us>*\n>\n\n\n-- \n--\nScott Mead\n*scott@meads.us <scott@meads.us>*\n\nMoving to bugs list. On Tue, Oct 26, 2021 at 11:23 AM Scott Mead <scott@meads.us> wrote:On Wed, May 26, 2021 at 4:01 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:On Wed, Apr 14, 2021 at 11:17 PM Mead, Scott <meads@amazon.com> wrote:\n>\n>\n>\n> > On Mar 1, 2021, at 8:43 PM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > CAUTION: This email originated from outside of the organization. 
Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n> >\n> >\n> >\n> > On Mon, Feb 8, 2021 at 11:49 PM Mead, Scott <meads@amazon.com> wrote:\n> >>\n> >> Hello,\n> >>   I recently looked at what it would take to make a running autovacuum pick-up a change to either cost_delay or cost_limit.  Users frequently will have a conservative value set, and then wish to change it when autovacuum initiates a freeze on a relation.  Most users end up finding out they are in ‘to prevent wraparound’ after it has happened, this means that if they want the vacuum to take advantage of more I/O, they need to stop and then restart the currently running vacuum (after reloading the GUCs).\n> >>\n> >>  Initially, my goal was to determine feasibility for making this dynamic.  I added debug code to vacuum.c:vacuum_delay_point(void) and found that changes to cost_delay and cost_limit are already processed by a running vacuum.  There was a bug preventing the cost_delay or cost_limit from being configured to allow higher throughput however.\n> >>\n> >> I believe this is a bug because currently, autovacuum will dynamically detect and increase the cost_limit or cost_delay, but it can never decrease those values beyond their setting when the vacuum began.  The current behavior is for vacuum to limit the maximum throughput of currently running vacuum processes to the cost_limit that was set when the vacuum process began.\n> >\n> > Thanks for your report.\n> >\n> > I've not looked at the patch yet but I agree that the calculation for\n> > autovacuum cost delay seems not to work fine if vacuum-delay-related\n> > parameters (e.g., autovacuum_vacuum_cost_delay etc) are changed during\n> > vacuuming a table to speed up running autovacuums. Here is my\n> > analysis:\n>\n>\n> I appreciate your in-depth analysis and will comment in-line.  That said, I still think it’s important that the attached path is applied.  
As it is today, a simple few lines of code prevent users from being able to increase the throughput on vacuums that are running without having to cancel them first.\n>\n> The patch that I’ve provided allows users to decrease their vacuum_cost_delay and get an immediate boost in performance to their running vacuum jobs.\n>\n>\n> >\n> > Suppose we have the following parameters and 3 autovacuum workers are\n> > running on different tables:\n> >\n> > autovacuum_vacuum_cost_delay = 100\n> > autovacuum_vacuum_cost_limit = 100\n> >\n> > Vacuum cost-based delay parameters for each workers are follows:\n> >\n> > worker->wi_cost_limit_base = 100\n> > worker->wi_cost_limit = 66\n> > worker->wi_cost_delay = 100\n\nSorry, worker->wi_cost_limit should be 33.\n\n> >\n> > Each running autovacuum has \"wi_cost_limit = 66\" because the total\n> > limit (100) is equally rationed. And another point is that the total\n> > wi_cost_limit (198 = 66*3) is less than autovacuum_vacuum_cost_limit,\n> > 100. Which are fine.\n\nSo the total wi_cost_limit, 99, is less than autovacuum_vacuum_cost_limit, 100.\n\n> >\n> > Here let's change autovacuum_vacuum_cost_delay/limit value to speed up\n> > running autovacuums.\n> >\n> > Case 1 : increasing autovacuum_vacuum_cost_limit to 1000.\n> >\n> > After reloading the configuration file, vacuum cost-based delay\n> > parameters for each worker become as follows:\n> >\n> > worker->wi_cost_limit_base = 100\n> > worker->wi_cost_limit = 100\n> > worker->wi_cost_delay = 100\n> >\n> > If we rationed autovacuum_vacuum_cost_limit, 1000, to 3 workers, it\n> > would be 333. But since we cap it by wi_cost_limit_base, the\n> > wi_cost_limit is 100. I think this is what Mead reported here.\n>\n>\n> Yes, this is exactly correct.  The cost_limit is capped at the cost_limit that was set during the start of a running vacuum.  
My patch changes this cap to be the max allowed cost_limit (10,000).\n\nThe comment of worker's limit calculation says:\n\n        /*\n         * We put a lower bound of 1 on the cost_limit, to avoid division-\n         * by-zero in the vacuum code.  Also, in case of roundoff trouble\n         * in these calculations, let's be sure we don't ever set\n         * cost_limit to more than the base value.\n         */\n        worker->wi_cost_limit = Max(Min(limit,\n                                        worker->wi_cost_limit_base),\n                                    1);\n\nIf we use the max cost_limit as the upper bound here, the worker's\nlimit could unnecessarily be higher than the base value in case of\nroundoff trouble? I think that the problem here is rather that we\ndon't update wi_cost_limit_base and wi_cost_delay when rebalancing the\ncost.\n\nCurrently, vacuum always limits you to the cost_limit_base from the time that your vacuum started.  I'm not sure why, I don't believe it's rounding related because the rest of the rebalancing code works properly.  ISTM that simply allowing the updated cost_limit is a simple solution since the rebalance code will automatically take it into account. 
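To make the clamping behaviour under discussion concrete, here is a small standalone C sketch of the arithmetic (not the actual autovacuum.c code; `clamp_worker_limit` is a hypothetical helper, and the numbers come from Sawada's worked example upthread: 3 workers rationing autovacuum_vacuum_cost_limit = 100, then the GUC raised to 1000):

```c
#include <assert.h>

/* Stand-ins for the Max/Min macros used in the PostgreSQL sources. */
#define Max(a, b) ((a) > (b) ? (a) : (b))
#define Min(a, b) ((a) < (b) ? (a) : (b))

/*
 * Per-worker clamp as currently coded in autovac_balance_cost(): the
 * rebalanced limit can never exceed cost_limit_base, which was captured
 * when the worker started vacuuming, and never drops below 1.
 */
static int
clamp_worker_limit(int limit, int cost_limit_base)
{
    return Max(Min(limit, cost_limit_base), 1);
}
```

With cost_limit_base = 100, a rationed share of 33 passes through unchanged, but after raising the GUC to 1000 (share 333 per worker) the result is still capped at 100 — which is the behaviour the proposed patch relaxes.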
\n\nRegards,\n\n--\nMasahiko Sawada\nEDB:  https://www.enterprisedb.com/\n\n\n-- --Scott Meadscott@meads.us", "msg_date": "Fri, 19 Nov 2021 13:36:49 -0500", "msg_from": "Scott Mead <scott@meads.us>", "msg_from_op": false, "msg_subject": "Re: [BUG] Autovacuum not dynamically decreasing cost_limit and\n cost_delay" }, { "msg_contents": "+1\nThis is a bug/design which is causing real pain in operations.\nThere are cases the autovacuum worker runs for days and we are left with no\noption other than cancelling it.", "msg_date": "Mon, 22 Nov 2021 10:06:23 +0530", "msg_from": "Jobin Augustine <jobin.augustine@percona.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Autovacuum not dynamically decreasing cost_limit and\n cost_delay" }, { "msg_contents": "On 2021-Feb-08, Mead, Scott wrote:\n\n> Hello,\n> I recently looked at what it would take to make a running autovacuum\n> pick-up a change to either cost_delay or cost_limit. Users frequently\n> will have a conservative value set, and then wish to change it when\n> autovacuum initiates a freeze on a relation. Most users end up\n> finding out they are in ‘to prevent wraparound’ after it has happened,\n> this means that if they want the vacuum to take advantage of more I/O,\n> they need to stop and then restart the currently running vacuum (after\n> reloading the GUCs).\n\nHello, I think this has been overlooked, right? I can't find a relevant\ncommit, but maybe I just didn't look hard enough. I have a feeling that\nthis is something that we should address. 
If you still have the cycles,\nplease consider posting an updated patch and creating a commitfest\nentry.\n\nThanks\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Someone said that it is at least an order of magnitude more work to do\nproduction software than a prototype. I think he is wrong by at least\nan order of magnitude.\" (Brian Kernighan)\n\n\n", "msg_date": "Mon, 23 Jan 2023 18:23:20 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [BUG] Autovacuum not dynamically decreasing cost_limit and\n cost_delay" }, { "msg_contents": "On Mon, Jan 23, 2023 at 12:33 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2021-Feb-08, Mead, Scott wrote:\n>\n> > Hello,\n> > I recently looked at what it would take to make a running autovacuum\n> > pick-up a change to either cost_delay or cost_limit. Users frequently\n> > will have a conservative value set, and then wish to change it when\n> > autovacuum initiates a freeze on a relation. Most users end up\n> > finding out they are in ‘to prevent wraparound’ after it has happened,\n> > this means that if they want the vacuum to take advantage of more I/O,\n> > they need to stop and then restart the currently running vacuum (after\n> > reloading the GUCs).\n>\n> Hello, I think this has been overlooked, right? I can't find a relevant\n> commit, but maybe I just didn't look hard enough. I have a feeling that\n> this is something that we should address. If you still have the cycles,\n> please consider posting an updated patch and creating a commitfest\n> entry.\n>\n\nThanks! Yeah, I should be able to get this together next week.\n\n\n>\n> Thanks\n>\n> --\n> Álvaro Herrera PostgreSQL Developer —\n> https://www.EnterpriseDB.com/\n> \"Someone said that it is at least an order of magnitude more work to do\n> production software than a prototype. 
I think he is wrong by at least\n> an order of magnitude.\" (Brian Kernighan)\n>\n\n\n-- \n--\nScott Mead\n*scott@meads.us <scott@meads.us>*", "msg_date": "Tue, 31 Jan 2023 10:35:54 -0500", "msg_from": "Scott Mead <scott@meads.us>", "msg_from_op": false, "msg_subject": "Re: [BUG] Autovacuum not dynamically decreasing cost_limit and\n cost_delay" }, { "msg_contents": "On Mon, Feb 8, 2021 at 9:49 AM Mead, Scott <meads@amazon.com> wrote:\n> Initially, my goal was to determine feasibility for making this dynamic. 
I added debug code to vacuum.c:vacuum_delay_point(void) and found that changes to cost_delay and cost_limit are already processed by a running vacuum. There was a bug preventing the cost_delay or cost_limit from being configured to allow higher throughput however.\n>\n> I believe this is a bug because currently, autovacuum will dynamically detect and increase the cost_limit or cost_delay, but it can never decrease those values beyond their setting when the vacuum began. The current behavior is for vacuum to limit the maximum throughput of currently running vacuum processes to the cost_limit that was set when the vacuum process began.\n>\n> I changed this (see attached) to allow the cost_limit to be re-calculated up to the maximum allowable (currently 10,000). This has the effect of allowing users to reload a configuration change and an in-progress vacuum can be ‘sped-up’ by setting either the cost_limit or cost_delay.\n>\n> The problematic piece is:\n>\n> diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c\n> index c6ec657a93..d3c6b0d805 100644\n> --- a/src/backend/postmaster/autovacuum.c\n> +++ b/src/backend/postmaster/autovacuum.c\n> @@ -1834,7 +1834,7 @@ autovac_balance_cost(void)\n> * cost_limit to more than the base value.\n> */\n> worker->wi_cost_limit = Max(Min(limit,\n> - worker->wi_cost_limit_base),\n> + MAXVACUUMCOSTLIMIT),\n> 1);\n> }\n>\n> We limit the worker to the max cost_limit that was set at the beginning of the vacuum.\n\nSo, in do_autovacuum() in the loop through all relations we will be\nvacuuming (around line 2308) (comment says \"perform operations on\ncollected tables\"), we will reload the config file first before\noperating on that table [1]. 
Any changes you have made to\nautovacuum_vacuum_cost_limit or other GUCs will be read and changed\nhere.\n\nLater in this same loop, table_recheck_autovac() will set\ntab->at_vacuum_cost_limit from vac_cost_limit which is set from the\nautovacuum_vacuum_cost_limit or vacuum_cost_limit and will pick up your\nrefreshed value.\n\nThen a bit further down, (before autovac_balance_cost()),\nMyWorkerInfo->wi_cost_limit_base is set from tab->at_vacuum_cost_limit.\n\nIn autovac_balance_cost(), when we loop through the running workers to\ncalculate the worker->wi_cost_limit, workers who have reloaded the\nconfig file in the do_autovacuum() loop prior to our taking the\nAutovacuumLock will have the new version of autovacuum_vacuum_cost_limit\nin their wi_cost_limit_base.\n\nIf you saw an old value in the DEBUG log output, that could be\nbecause it was for a worker who has not yet reloaded the config file.\n(the launcher also calls autovac_balance_cost(), but I will\nassume we are just talking about the workers here).\n\nNote that this will only pick up changes between tables being\nautovacuumed. If you want to see updates to the value in the middle of\nautovacuum vacuuming a table, then we would need to reload the\nconfiguration file more often than just between tables.\n\nI have started a discussion about doing this in [2]. I made it a\nseparate thread because my proposed changes would have effects outside\nof autovacuum. Processing the config file reload in vacuum_delay_point()\nwould affect vacuum and analyze (i.e. not just autovacuum). Explicit\nvacuum and analyze rely on the per statement config reload in\nPostgresMain().\n\n> Interestingly, autovac_balance_cost(void) is only updating the cost_limit, even if the cost_delay is modified. 
This is done correctly, it was just a surprise to see the behavior.\n\nIf this was during vacuuming of a single table, this is expected for the\nsame reason described above.\n\n- Melanie\n\n[1] https://github.com/postgres/postgres/blob/master/src/backend/postmaster/autovacuum.c#L2324\n[2] https://www.postgresql.org/message-id/CAAKRu_ZngzqnEODc7LmS1NH04Kt6Y9huSjz5pp7%2BDXhrjDA0gw%40mail.gmail.com\n\n\n", "msg_date": "Thu, 23 Feb 2023 17:22:14 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Autovacuum not dynamically decreasing cost_limit and\n cost_delay" } ]
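The reload timing Melanie describes in the thread above — a running autovacuum picks up GUC changes between tables (via the config reload in do_autovacuum()'s per-table loop), but not while a single table is being vacuumed — can be sketched as a toy model. All names here are illustrative stand-ins, not PostgreSQL internals:

```c
#include <assert.h>

/* Illustrative stand-ins for a GUC value on disk vs. the value in force. */
static int pending_cost_limit = 200;  /* value written in postgresql.conf */
static int active_cost_limit  = 200;  /* value the worker is acting on */

/* Analogous to the config-file reload done between tables. */
static void
reload_config(void)
{
    active_cost_limit = pending_cost_limit;
}

/* "Vacuum" one table: uses whatever limit was loaded at table start. */
static int
vacuum_one_table(void)
{
    return active_cost_limit;
}

/*
 * Vacuum two tables; the admin edits the GUC after the first table.
 * The change is only seen once the next between-tables reload happens.
 */
static int
vacuum_two_tables_with_change(int new_limit, int *second_table_limit)
{
    reload_config();
    int first_table_limit = vacuum_one_table();  /* table 1: old value */

    pending_cost_limit = new_limit;              /* admin edits the GUC */

    reload_config();                             /* between-tables reload */
    *second_table_limit = vacuum_one_table();    /* table 2: new value */
    return first_table_limit;
}
```

This is only a model of the granularity: a change made while one table is mid-vacuum takes effect for the next table, which is why the linked discussion proposes processing reloads inside vacuum_delay_point() instead.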
[ { "msg_contents": "Hi,\n\nWith [0] we got COPY progress reporting. Before the column names of\nthis newly added view are effectively set in stone with the release of\npg14, I propose the following set of relatively small patches. These\nare v2, because it is a patchset that is based on a set of patches\nthat I previously posted in [0].\n\n0001 Adds a column to pg_stat_progress_copy which details the amount\nof tuples that were excluded from insertion by the WHERE clause of the\nCOPY FROM command.\n\n0002 alters pg_stat_progress_copy to use 'tuple'-terminology instead\nof 'line'-terminology. 'Line' doesn't make sense in the binary copy\ncase, and only for the 'text' copy format there can be a guarantee\nthat the source / output file actually contains the reported amount of\nlines, whereas the amount of data tuples (which is also what it's\ncalled internally) is guaranteed to equal for all data types.\n\nThere was some discussion about this in [0] where the author thought\n'line' is more consistent with the CSV documentation, and where I\nargued that 'tuple' is both more consistent with the rest of the\nprogress reporting tables and more consistent with the actual counted\nitems: these are the tuples serialized / inserted (as noted in the CSV\ndocs; \"Thus the files are not strictly one line per table row like\ntext-format files.\").\n\nPatch 0003 adds backlinks to the progress reporting docs from the docs\nof the commands that have progress reporting (re/index, cluster,\nvacuum, etc.) such that progress reporting is better discoverable from\nthe relevant commands, and removes the datname column from the\nprogress_copy view (that column was never committed). This too should\nbe fairly trivial and uncontroversial.\n\n0004 adds the 'command' column to the progress_copy view; which\ndistinguishes between COPY FROM and COPY TO. 
The two commands are (in\nmy opinion) significantly different enough to warrant this column;\nsimilar to the difference between CREATE INDEX/REINDEX [CONCURRENTLY]\nwhich also report that information. I believe that this change is\nappropriate; as the semantics of the columns change depending on the\ncommand being executed.\n\nLastly, 0005 adds 'io_target' to the reported information, that is,\nFILE, PROGRAM, STDIO or CALLBACK. Although this can relatively easily\nbe determined based on the commands in pg_stat_activity, it is\nreasonably something that a user would want to query on, as the\norigin/target of COPY has security and performance implications,\nwhereas other options (e.g. format) are less interesting for clients\nthat are not executing that specific COPY command.\n\nOf special interest in 0005 is that it reports the io_target for the\nlogical replications' initial tablesyncs' internal COPY. This would\notherwise be measured, but no knowledge about the type of copy (or its\norigin) would be available on the worker's side. I'm not married to\nthis patch 0005, but I believe it could be useful, and therefore\nincluded it in the patchset.\n\n\nWith regards,\n\nMatthias van de Meent.\n\n\n[0] https://www.postgresql.org/message-id/flat/CAFp7Qwr6_FmRM6pCO0x_a0mymOfX_Gg%2BFEKet4XaTGSW%3DLitKQ%40mail.gmail.com", "msg_date": "Mon, 8 Feb 2021 19:35:45 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Improvements and additions to COPY progress reporting" }, { "msg_contents": "po 8. 2. 2021 v 19:35 odesílatel Matthias van de Meent\n<boekewurm+postgres@gmail.com> napsal:\n>\n> Hi,\n>\n> With [0] we got COPY progress reporting. Before the column names of\n> this newly added view are effectively set in stone with the release of\n> pg14, I propose the following set of relatively small patches. 
These\n> are v2, because it is a patchset that is based on a set of patches\n> that I previously posted in [0].\n\nHello. I had this in my backlog to revisit this feature as well before\nthe release. Thanks for picking this up.\n\n> 0001 Adds a column to pg_stat_progress_copy which details the amount\n> of tuples that were excluded from insertion by the WHERE clause of the\n> COPY FROM command.\n>\n> 0002 alters pg_stat_progress_copy to use 'tuple'-terminology instead\n> of 'line'-terminology. 'Line' doesn't make sense in the binary copy\n> case, and only for the 'text' copy format there can be a guarantee\n> that the source / output file actually contains the reported amount of\n> lines, whereas the amount of data tuples (which is also what it's\n> called internally) is guaranteed to equal for all data types.\n>\n> There was some discussion about this in [0] where the author thought\n> 'line' is more consistent with the CSV documentation, and where I\n> argued that 'tuple' is both more consistent with the rest of the\n> progress reporting tables and more consistent with the actual counted\n> items: these are the tuples serialized / inserted (as noted in the CSV\n> docs; \"Thus the files are not strictly one line per table row like\n> text-format files.\").\n\nAs an mentioned author I have no preference over line or tuple\nterminology here. For some cases \"line\" terminology fits better, for\nsome \"tuple\" one. Docs can be improved later if needed to make it\nclear at some cases (for example in most common case probably - CSV\nimport/export) tuple equals one line in CSV.\n\n> Patch 0003 adds backlinks to the progress reporting docs from the docs\n> of the commands that have progress reporting (re/index, cluster,\n> vacuum, etc.) such that progress reporting is better discoverable from\n> the relevant commands, and removes the datname column from the\n> progress_copy view (that column was never committed). 
This too should\n> be fairly trivial and uncontroversial.\n>\n> 0004 adds the 'command' column to the progress_copy view; which\n> distinguishes between COPY FROM and COPY TO. The two commands are (in\n> my opinion) significantly different enough to warrant this column;\n> similar to the difference between CREATE INDEX/REINDEX [CONCURRENTLY]\n> which also report that information. I believe that this change is\n> appropriate; as the semantics of the columns change depending on the\n> command being executed.\n\nThis was part of my initial patch as well, but I decided to strip it\nout to make the final patch as small as possible to make it quickly\nmergeable without need of further discussion. From my side this is\nuseful to have directly in the progress report as well.\n\n> Lastly, 0005 adds 'io_target' to the reported information, that is,\n> FILE, PROGRAM, STDIO or CALLBACK. Although this can relatively easily\n> be determined based on the commands in pg_stat_activity, it is\n> reasonably something that a user would want to query on, as the\n> origin/target of COPY has security and performance implications,\n> whereas other options (e.g. format) are less interesting for clients\n> that are not executing that specific COPY command.\n\nSimilar (simplified, not supporting CALLBACK) info was also part of\nthe initial patch and stripped out later. I'm also +1 on this info\nbeing useful to have directly in the progress report.\n\n> Of special interest in 0005 is that it reports the io_target for the\n> logical replications' initial tablesyncs' internal COPY. This would\n> otherwise be measured, but no knowledge about the type of copy (or its\n> origin) would be available on the worker's side. I'm not married to\n> this patch 0005, but I believe it could be useful, and therefore\n> included it in the patchset.\n\nAll patches seem good to me. 
I was able to apply them to current clean\nmaster and \"make check\" has succeeded without problems.\n\n>\n> With regards,\n>\n> Matthias van de Meent.\n>\n>\n> [0] https://www.postgresql.org/message-id/flat/CAFp7Qwr6_FmRM6pCO0x_a0mymOfX_Gg%2BFEKet4XaTGSW%3DLitKQ%40mail.gmail.com\n\n\n", "msg_date": "Tue, 9 Feb 2021 08:02:55 +0100", "msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Tue, Feb 9, 2021 at 12:06 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> With [0] we got COPY progress reporting. Before the column names of\n> this newly added view are effectively set in stone with the release of\n> pg14, I propose the following set of relatively small patches. These\n> are v2, because it is a patchset that is based on a set of patches\n> that I previously posted in [0].\n\nThanks for working on the patches. Here are some comments:\n\n0001 - +1 to add tuples_excluded and the patch LGTM.\n\n0002 - Yes, the tuples_processed or tuples_excluded makes more sense\nto me than lines_processed and lines_excluded. The patch LGTM.\n\n0003 - Instead of just adding the progress reporting to \"See also\"\nsections in the footer of the respective pages analyze, cluster and\nothers, it would be nice if we have a mention of it in the description\nas pg_basebackup has something like below:\n <para>\n Whenever <application>pg_basebackup</application> is taking a base\n backup, the server's <structname>pg_stat_progress_basebackup</structname>\n view will report the progress of the backup.\n See <xref linkend=\"basebackup-progress-reporting\"/> for details.\n\n0004 -\n1) How about PROGRESS_COPY_COMMAND_TYPE instead of\nPROGRESS_COPY_COMMAND? 
The names looks bit confusing with the existing\nPROGRESS_COMMAND_COPY.\n\n0005 -\n1) How about\n+ or <literal>CALLBACK</literal> (used in the table\nsynchronization background\n+ worker).\ninstead of\n+ or <literal>CALLBACK</literal> (used in the tablesync background\n+ worker).\nBecause \"table synchronization\" is being used in logical-replication.sgml.\n\n2) I think cstate->copy_src = COPY_CALLBACK is assigned after the\nswitch case added in copyfrom.c\n if (data_source_cb)\n {\n cstate->copy_src = COPY_CALLBACK;\n cstate->data_source_cb = data_source_cb;\n }\n\nAlso, you can add this to the current commitfest.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 9 Feb 2021 12:42:32 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "po 8. 2. 2021 v 19:35 odesílatel Matthias van de Meent\n<boekewurm+postgres@gmail.com> napsal:\n>\n> Hi,\n>\n> With [0] we got COPY progress reporting. Before the column names of\n> this newly added view are effectively set in stone with the release of\n> pg14, I propose the following set of relatively small patches. These\n> are v2, because it is a patchset that is based on a set of patches\n> that I previously posted in [0].\n>\n> 0001 Adds a column to pg_stat_progress_copy which details the amount\n> of tuples that were excluded from insertion by the WHERE clause of the\n> COPY FROM command.\n>\n> 0002 alters pg_stat_progress_copy to use 'tuple'-terminology instead\n> of 'line'-terminology. 
'Line' doesn't make sense in the binary copy\n> case, and only for the 'text' copy format there can be a guarantee\n> that the source / output file actually contains the reported amount of\n> lines, whereas the amount of data tuples (which is also what it's\n> called internally) is guaranteed to equal for all data types.\n>\n> There was some discussion about this in [0] where the author thought\n> 'line' is more consistent with the CSV documentation, and where I\n> argued that 'tuple' is both more consistent with the rest of the\n> progress reporting tables and more consistent with the actual counted\n> items: these are the tuples serialized / inserted (as noted in the CSV\n> docs; \"Thus the files are not strictly one line per table row like\n> text-format files.\").\n>\n> Patch 0003 adds backlinks to the progress reporting docs from the docs\n> of the commands that have progress reporting (re/index, cluster,\n> vacuum, etc.) such that progress reporting is better discoverable from\n> the relevant commands, and removes the datname column from the\n> progress_copy view (that column was never committed). This too should\n> be fairly trivial and uncontroversial.\n>\n> 0004 adds the 'command' column to the progress_copy view; which\n> distinguishes between COPY FROM and COPY TO. The two commands are (in\n> my opinion) significantly different enough to warrant this column;\n> similar to the difference between CREATE INDEX/REINDEX [CONCURRENTLY]\n> which also report that information. I believe that this change is\n> appropriate; as the semantics of the columns change depending on the\n> command being executed.\n>\n> Lastly, 0005 adds 'io_target' to the reported information, that is,\n> FILE, PROGRAM, STDIO or CALLBACK. 
Although this can relatively easily\n> be determined based on the commands in pg_stat_activity, it is\n> reasonably something that a user would want to query on, as the\n> origin/target of COPY has security and performance implications,\n> whereas other options (e.g. format) are less interesting for clients\n> that are not executing that specific COPY command.\n\nI took a little deeper look and I'm not sure if I understand FILE and\nSTDIO. I have finally tried to finalize some initial regress testing\nof COPY command progress using triggers. I have attached the initial\npatch applicable to your changes. As you can see COPY FROM STDIN is\nreported as FILE. That's probably expected, but it is a little\nconfusing for me since STDIN and STDIO sound similar. What is the\npurpose of STDIO? When is the COPY command reported with io_target of\nSTDIO?\n\n> Of special interest in 0005 is that it reports the io_target for the\n> logical replications' initial tablesyncs' internal COPY. This would\n> otherwise be measured, but no knowledge about the type of copy (or its\n> origin) would be available on the worker's side. I'm not married to\n> this patch 0005, but I believe it could be useful, and therefore\n> included it in the patchset.\n>\n>\n> With regards,\n>\n> Matthias van de Meent.\n>\n>\n> [0] https://www.postgresql.org/message-id/flat/CAFp7Qwr6_FmRM6pCO0x_a0mymOfX_Gg%2BFEKet4XaTGSW%3DLitKQ%40mail.gmail.com", "msg_date": "Tue, 9 Feb 2021 09:32:48 +0100", "msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Tue, 9 Feb 2021 at 09:32, Josef Šimánek <josef.simanek@gmail.com> wrote:\n>\n> po 8. 2. 2021 v 19:35 odesílatel Matthias van de Meent\n> <boekewurm+postgres@gmail.com> napsal:\n> > Lastly, 0005 adds 'io_target' to the reported information, that is,\n> > FILE, PROGRAM, STDIO or CALLBACK. 
Although this can relatively easily\n> > be determined based on the commands in pg_stat_activity, it is\n> > reasonably something that a user would want to query on, as the\n> > origin/target of COPY has security and performance implications,\n> > whereas other options (e.g. format) are less interesting for clients\n> > that are not executing that specific COPY command.\n>\n> I took a little deeper look and I'm not sure if I understand FILE and\n> STDIO. I have finally tried to finalize some initial regress testing\n> of COPY command progress using triggers. I have attached the initial\n> patch applicable to your changes. As you can see COPY FROM STDIN is\n> reported as FILE. That's probably expected, but it is a little\n> confusing for me since STDIN and STDIO sound similar. What is the\n> purpose of STDIO? When is the COPY command reported with io_target of\n> STDIO?\n\nI checked for the type of the copy_src before it was correctly set,\ntherefore only reporting FILE type, but this will be fixed shortly in\nv3.\n\nMatthias\n\n\n", "msg_date": "Tue, 9 Feb 2021 12:51:20 +0100", "msg_from": "0010203112132233 <boekewurm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "út 9. 2. 2021 v 12:51 odesílatel 0010203112132233 <boekewurm@gmail.com> napsal:\n>\n> On Tue, 9 Feb 2021 at 09:32, Josef Šimánek <josef.simanek@gmail.com> wrote:\n> >\n> > po 8. 2. 2021 v 19:35 odesílatel Matthias van de Meent\n> > <boekewurm+postgres@gmail.com> napsal:\n> > > Lastly, 0005 adds 'io_target' to the reported information, that is,\n> > > FILE, PROGRAM, STDIO or CALLBACK. Although this can relatively easily\n> > > be determined based on the commands in pg_stat_activity, it is\n> > > reasonably something that a user would want to query on, as the\n> > > origin/target of COPY has security and performance implications,\n> > > whereas other options (e.g. 
format) are less interesting for clients\n> > > that are not executing that specific COPY command.\n> >\n> > I took a little deeper look and I'm not sure if I understand FILE and\n> > STDIO. I have finally tried to finalize some initial regress testing\n> > of COPY command progress using triggers. I have attached the initial\n> > patch applicable to your changes. As you can see COPY FROM STDIN is\n> > reported as FILE. That's probably expected, but it is a little\n> > confusing for me since STDIN and STDIO sound similar. What is the\n> > purpose of STDIO? When is the COPY command reported with io_target of\n> > STDIO?\n>\n> I checked for the type of the copy_src before it was correctly set,\n> therefore only reporting FILE type, but this will be fixed shortly in\n> v3.\n\nOK, would you mind to integrate my regression test initial patch as\nwell in v3 or should I submit it later in a separate way?\n\n> Matthias\n\n\n", "msg_date": "Tue, 9 Feb 2021 12:53:44 +0100", "msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Tue, 9 Feb 2021 at 08:12, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Feb 9, 2021 at 12:06 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > With [0] we got COPY progress reporting. Before the column names of\n> > this newly added view are effectively set in stone with the release of\n> > pg14, I propose the following set of relatively small patches. These\n> > are v2, because it is a patchset that is based on a set of patches\n> > that I previously posted in [0].\n>\n> Thanks for working on the patches. Here are some comments:\n>\n> 0001 - +1 to add tuples_excluded and the patch LGTM.\n>\n> 0002 - Yes, the tuples_processed or tuples_excluded makes more sense\n> to me than lines_processed and lines_excluded. 
The patch LGTM.\n>\n> 0003 - Instead of just adding the progress reporting to \"See also\"\n> sections in the footer of the respective pages analyze, cluster and\n> others, it would be nice if we have a mention of it in the description\n> as pg_basebackup has something like below:\n> <para>\n> Whenever <application>pg_basebackup</application> is taking a base\n> backup, the server's <structname>pg_stat_progress_basebackup</structname>\n> view will report the progress of the backup.\n> See <xref linkend=\"basebackup-progress-reporting\"/> for details.\n\nAdded\n\n> 0004 -\n> 1) How about PROGRESS_COPY_COMMAND_TYPE instead of\n> PROGRESS_COPY_COMMAND? The names looks bit confusing with the existing\n> PROGRESS_COMMAND_COPY.\n\nThe current name is consistent with the naming of the other\ncommand-reporting progress views; CREATEIDX and CLUSTER both use the\n*_COMMAND as this column indexes' internal name.\n\n> 0005 -\n> 1) How about\n> + or <literal>CALLBACK</literal> (used in the table\n> synchronization background\n> + worker).\n> instead of\n> + or <literal>CALLBACK</literal> (used in the tablesync background\n> + worker).\n> Because \"table synchronization\" is being used in logical-replication.sgml.\n\nFixed\n\n> 2) I think cstate->copy_src = COPY_CALLBACK is assigned after the\n> switch case added in copyfrom.c\n> if (data_source_cb)\n> {\n> cstate->copy_src = COPY_CALLBACK;\n> cstate->data_source_cb = data_source_cb;\n> }\n\nYes, I noticed this too while working on the patchset, but apparently\ndidn't act on this... 
Fixed in attachted version.\n\n> Also, you can add this to the current commitfest.\n\nSee https://commitfest.postgresql.org/32/2977/\n\nOn Tue, 9 Feb 2021 at 12:53, Josef Šimánek <josef.simanek@gmail.com> wrote:\n>\n> OK, would you mind to integrate my regression test initial patch as\n> well in v3 or should I submit it later in a separate way?\n\nAttached, with minor fixes\n\n\nWith regards,\n\nMatthias van de Meent", "msg_date": "Tue, 9 Feb 2021 13:32:33 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Tue, Feb 9, 2021 at 6:02 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> > Also, you can add this to the current commitfest.\n>\n> See https://commitfest.postgresql.org/32/2977/\n>\n> On Tue, 9 Feb 2021 at 12:53, Josef Šimánek <josef.simanek@gmail.com> wrote:\n> >\n> > OK, would you mind to integrate my regression test initial patch as\n> > well in v3 or should I submit it later in a separate way?\n>\n> Attached, with minor fixes\n\nWhy do we need to have a new test file progress.sql for the test\ncases? Can't we add them into existing copy.sql or copy2.sql? 
Or do\nyou have a plan to add test cases into progress.sql for other progress\nreporting commands?\n\nIMO, it's better not to add any new test file but add the tests to existing files.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 10 Feb 2021 12:13:08 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Wed, 10 Feb 2021 at 07:43, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Feb 9, 2021 at 6:02 PM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > > Also, you can add this to the current commitfest.\n> >\n> > See https://commitfest.postgresql.org/32/2977/\n> >\n> > On Tue, 9 Feb 2021 at 12:53, Josef Šimánek <josef.simanek@gmail.com> wrote:\n> > >\n> > > OK, would you mind to integrate my regression test initial patch as\n> > > well in v3 or should I submit it later in a separate way?\n> >\n> > Attached, with minor fixes\n>\n> Why do we need to have a new test file progress.sql for the test\n> cases? Can't we add them into existing copy.sql or copy2.sql? Or do\n> you have a plan to add test cases into progress.sql for other progress\n> reporting commands?\n\nI don't mind moving the test into copy or copy2, but the main reason\nto put it in a separate file is to test the 'copy' component of the\nfeature called 'progress reporting'. If the feature instead is 'copy'\nand 'progress reporting' is part of that feature, then I'd put it in\nthe copy-tests, but because the documentation of this has its own\ndocs page 'progress reporting', and because 'copy' is a subsection of\nthat, I do think that this feature warrants its own regression test\nfile.\n\nThere are no other tests for the progress reporting feature yet,\nbecause COPY ... 
FROM is the only command that is progress reported\n_and_ that can fire triggers while running the command, so checking\nthe progress view during the progress reported command is only\nfeasible in COPY progress reporting. To test the other progress\nreporting views, we would need multiple sessions, which I believe is\nimpossible in this test format. Please correct me if I'm wrong; I'd\nlove to add tests for the other components. That will not be in this\npatchset, though.\n\n> IMO, it's better not add any new test file but add the tests to existing files.\n\nIn general I agree, but in some cases (e.g. new system component, new\nfull-fledged feature), new test files are needed. I think that this\ncould be one of those cases.\n\n\nWith regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Thu, 11 Feb 2021 15:27:15 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Thu, 11 Feb 2021 at 15:27, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Wed, 10 Feb 2021 at 07:43, Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Tue, Feb 9, 2021 at 6:02 PM Matthias van de Meent\n> > <boekewurm+postgres@gmail.com> wrote:\n> > > > Also, you can add this to the current commitfest.\n> > >\n> > > See https://commitfest.postgresql.org/32/2977/\n> > >\n> > > On Tue, 9 Feb 2021 at 12:53, Josef Šimánek <josef.simanek@gmail.com> wrote:\n> > > >\n> > > > OK, would you mind to integrate my regression test initial patch as\n> > > > well in v3 or should I submit it later in a separate way?\n> > >\n> > > Attached, with minor fixes\n> >\n> > Why do we need to have a new test file progress.sql for the test\n> > cases? Can't we add them into existing copy.sql or copy2.sql?
Or do\n> > you have a plan to add test cases into progress.sql for other progress\n> > reporting commands?\n>\n> I don't mind moving the test into copy or copy2, but the main reason\n> to put it in a seperate file is to test the 'copy' component of the\n> feature called 'progress reporting'. If the feature instead is 'copy'\n> and 'progress reporting' is part of that feature, then I'd put it in\n> the copy-tests, but because the documentation of this has it's own\n> docs page 'progress reporting', and because 'copy' is a subsection of\n> that, I do think that this feature warrants its own regression test\n> file.\n>\n> There are no other tests for the progress reporting feature yet,\n> because COPY ... FROM is the only command that is progress reported\n> _and_ that can fire triggers while running the command, so checking\n> the progress view during the progress reported command is only\n> feasable in COPY progress reporting. To test the other progress\n> reporting views, we would need multiple sessions, which I believe is\n> impossible in this test format. Please correct me if I'm wrong; I'd\n> love to add tests for the other components. That will not be in this\n> patchset, though.\n>\n> > IMO, it's better not add any new test file but add the tests to existing files.\n>\n> In general I agree, but in some cases (e.g. new system component, new\n> full-fledged feature), new test files are needed. I think that this\n> could be one of those cases.\n\nI have split it since it should be the start of progress reporting\ntesting at all. 
If you better consider this as part of COPY testing,\nfeel free to move it to already existing copy testing related files.\nThere's no real reason to keep it separated if not needed.\n\n>\n> With regards,\n>\n> Matthias van de Meent\n\n\n", "msg_date": "Thu, 11 Feb 2021 15:38:42 +0100", "msg_from": "Josef Šimánek <josef.simanek@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Thu, Feb 11, 2021, 8:08 PM Josef Šimánek <josef.simanek@gmail.com> wrote:\n\n> On Thu, 11 Feb 2021 at 15:27, Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> >\n> > On Wed, 10 Feb 2021 at 07:43, Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > On Tue, Feb 9, 2021 at 6:02 PM Matthias van de Meent\n> > > <boekewurm+postgres@gmail.com> wrote:\n> > > > > Also, you can add this to the current commitfest.\n> > > >\n> > > > See https://commitfest.postgresql.org/32/2977/\n> > > >\n> > > > On Tue, 9 Feb 2021 at 12:53, Josef Šimánek <josef.simanek@gmail.com>\n> wrote:\n> > > > >\n> > > > > OK, would you mind to integrate my regression test initial patch as\n> > > > > well in v3 or should I submit it later in a separate way?\n> > > >\n> > > > Attached, with minor fixes\n> > >\n> > > Why do we need to have a new test file progress.sql for the test\n> > > cases? Can't we add them into existing copy.sql or copy2.sql? Or do\n> > > you have a plan to add test cases into progress.sql for other progress\n> > > reporting commands?\n> >\n> > I don't mind moving the test into copy or copy2, but the main reason\n> > to put it in a seperate file is to test the 'copy' component of the\n> > feature called 'progress reporting'. 
If the feature instead is 'copy'\n> > and 'progress reporting' is part of that feature, then I'd put it in\n> > the copy-tests, but because the documentation of this has it's own\n> > docs page  'progress reporting', and because 'copy' is a subsection of\n> > that, I do think that this feature warrants its own regression test\n> > file.\n> >\n> > There are no other tests for the progress reporting feature yet,\n> > because COPY ... FROM is the only command that is progress reported\n> > _and_ that can fire triggers while running the command, so checking\n> > the progress view during the progress reported command is only\n> > feasable in COPY progress reporting. To test the other progress\n> > reporting views, we would need multiple sessions, which I believe is\n> > impossible in this test format. Please correct me if I'm wrong; I'd\n> > love to add tests for the other components. That will not be in this\n> > patchset, though.\n> >\n> > > IMO, it's better not add any new test file but add the tests to existing files.\n> >\n> > In general I agree, but in some cases (e.g. new system component, new\n> > full-fledged feature), new test files are needed. I think that this\n> > could be one of those cases.\n>\n> I have split it since it should be the start of progress reporting\n> testing at all. If you better consider this as part of COPY testing,\n> feel free to move it to already existing copy testing related files.\n> There's no real reason to keep it separated if not needed.\n\n+1 to move those test cases to existing copy test files.", "msg_date": "Thu, 11 Feb 2021 20:14:37 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Thu, 11 Feb 2021 at 15:44, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n>\n> On Thu, Feb 11, 2021, 8:08 PM Josef Šimánek <josef.simanek@gmail.com> wrote:\n>> I have split it since it should be the start of progress reporting\n>> testing at all. If you better consider this as part of COPY testing,\n>> feel free to move it to already existing copy testing related files.\n>> There's no real reason to keep it separated if not needed.\n>\n>\n> +1 to move those test cases to existing copy test files.\n\nThanks for your reviews. PFA v4 of the patchset, in which the tests\nare put into copy.sql (well, input/copy.source). This also adds tests\nfor correctly reporting COPY ...
FROM 'file'.\n\nI've changed the notice-alerted format from manually naming each\ncolumn to calling to_jsonb and removing the unstable columns from the\nreported value; this should therefore be stable and give direct notice\nto changes in the view.\n\nWith regards,\n\nMatthias van de Meent.", "msg_date": "Fri, 12 Feb 2021 12:23:34 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Fri, 12 Feb 2021 at 12:23, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Thu, 11 Feb 2021 at 15:44, Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> >\n> > On Thu, Feb 11, 2021, 8:08 PM Josef Šimánek <josef.simanek@gmail.com> wrote:\n> >> I have split it since it should be the start of progress reporting\n> >> testing at all. If you better consider this as part of COPY testing,\n> >> feel free to move it to already existing copy testing related files.\n> >> There's no real reason to keep it separated if not needed.\n> >\n> >\n> > +1 to move those test cases to existing copy test files.\n>\n> Thanks for your reviews. PFA v4 of the patchset, in which the tests\n> are put into copy.sql (well, input/copy.source). This also adds tests\n> for correctly reporting COPY ... FROM 'file'.\n\nPFA v5, which fixes a failure in the pg_upgrade regression tests due\nto incorrect usage of @abs_builddir@. 
I had the changes staged, but\nforgot to add them to the patches.\n\nSorry for the noise.\n\n-Matthias", "msg_date": "Fri, 12 Feb 2021 13:10:46 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Fri, Feb 12, 2021 at 5:40 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Fri, 12 Feb 2021 at 12:23, Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> >\n> > On Thu, 11 Feb 2021 at 15:44, Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > >\n> > > On Thu, Feb 11, 2021, 8:08 PM Josef Šimánek <josef.simanek@gmail.com> wrote:\n> > >> I have split it since it should be the start of progress reporting\n> > >> testing at all. If you better consider this as part of COPY testing,\n> > >> feel free to move it to already existing copy testing related files.\n> > >> There's no real reason to keep it separated if not needed.\n> > >\n> > >\n> > > +1 to move those test cases to existing copy test files.\n> >\n> > Thanks for your reviews. PFA v4 of the patchset, in which the tests\n> > are put into copy.sql (well, input/copy.source). This also adds tests\n> > for correctly reporting COPY ... FROM 'file'.\n>\n> PFA v5, which fixes a failure in the pg_upgrade regression tests due\n> to incorrect usage of @abs_builddir@. I had the changes staged, but\n> forgot to add them to the patches.\n>\n> Sorry for the noise.\n\nLooks like the patch 0001 that was adding tuples_excluded was missing\nand cfbot is also not happy with the v5 patch set.\n\nMaybe, we may not need 6 patches as they are relatively very small\npatches. 
IMO, the following are enough:\n\n0001 - tuples_excluded, lines to tuples change, COPY FROM/COPY TO\naddition, io_target -- basically all the code related patches can go\ninto 0001\n0002 - documentation\n0003 - tests - I think we can only have a simple test(in copy2.sql)\nshowing stdin/stdout and not have file related tests. Because these\npatches work as expected, please find my testing below:\n\npostgres=# select * from pg_stat_progress_copy;\n pid | datid | datname | relid | command | io_target |\nbytes_processed | bytes_total | tuples_processed | tuples_excluded\n---------+-------+----------+-------+-----------+-----------+-----------------+-------------+------------------+-----------------\n 2886103 | 12977 | postgres | 16384 | COPY FROM | FILE |\n83099648 | 85777795 | 9553999 | 1111111\n(1 row)\n\npostgres=# select * from pg_stat_progress_copy;\n pid | datid | datname | relid | command | io_target |\nbytes_processed | bytes_total | tuples_processed | tuples_excluded\n---------+-------+----------+-------+-----------+-----------+-----------------+-------------+------------------+-----------------\n 2886103 | 12977 | postgres | 16384 | COPY FROM | STDIO |\n 0 | 0 | 0 | 0\n(1 row)\n\npostgres=# select * from pg_stat_progress_copy;\n pid | datid | datname | relid | command | io_target |\nbytes_processed | bytes_total | tuples_processed | tuples_excluded\n---------+-------+----------+-------+---------+-----------+-----------------+-------------+------------------+-----------------\n 2886103 | 12977 | postgres | 16384 | COPY TO | FILE |\n37771610 | 0 | 4999228 | 0\n(1 row)\n\npostgres=# select * from pg_stat_progress_copy;\n pid | datid | datname | relid | command | io_target |\nbytes_processed | bytes_total | tuples_processed | tuples_excluded\n---------+-------+----------+-------+-----------+-----------+-----------------+-------------+------------------+-----------------\n 2892816 | 12977 | postgres | 16384 | COPY FROM | CALLBACK |\n249777823 | 0 | 31888892 | 
0\n(1 row)\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 12 Feb 2021 18:10:28 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Fri, 12 Feb 2021 at 13:40, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Feb 12, 2021 at 5:40 PM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> >\n> > On Fri, 12 Feb 2021 at 12:23, Matthias van de Meent\n> > <boekewurm+postgres@gmail.com> wrote:\n> > >\n> > > On Thu, 11 Feb 2021 at 15:44, Bharath Rupireddy\n> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > >\n> > > >\n> > > > On Thu, Feb 11, 2021, 8:08 PM Josef Šimánek <josef.simanek@gmail.com> wrote:\n> > > >> I have split it since it should be the start of progress reporting\n> > > >> testing at all. If you better consider this as part of COPY testing,\n> > > >> feel free to move it to already existing copy testing related files.\n> > > >> There's no real reason to keep it separated if not needed.\n> > > >\n> > > >\n> > > > +1 to move those test cases to existing copy test files.\n> > >\n> > > Thanks for your reviews. PFA v4 of the patchset, in which the tests\n> > > are put into copy.sql (well, input/copy.source). This also adds tests\n> > > for correctly reporting COPY ... FROM 'file'.\n> >\n> > PFA v5, which fixes a failure in the pg_upgrade regression tests due\n> > to incorrect usage of @abs_builddir@. I had the changes staged, but\n> > forgot to add them to the patches.\n> >\n> > Sorry for the noise.\n>\n> Looks like the patch 0001 that was adding tuples_excluded was missing\n> and cfbot is also not happy with the v5 patch set.\n>\n> Maybe, we may not need 6 patches as they are relatively very small\n> patches. 
IMO, the following are enough:\n>\n> 0001 - tuples_excluded, lines to tuples change, COPY FROM/COPY TO\n> addition, io_target -- basically all the code related patches can go\n> into 0001\n> 0002 - documentation\n> 0003 - tests - I think we can only have a simple test(in copy2.sql)\n> showing stdin/stdout and not have file related tests. Because these\n> patches work as expected, please find my testing below:\n\nI agree with that split, the current split was mainly for the reason\nthat some of the patches (except 1, 3 and 6, which are quite\nsubstantially different from the rest) each have had their separate\nconcerns voiced about the changes contained in that patch (be it\ndirect or indirect); e.g. the renaming of lines_* to tuples_* is in my\nopinion a good thing, and Josef disagrees.\n\nAnyway, please find attached patchset v6 applying that split.\n\nRegarding only a simple test: I believe it is useful to have at least\na test that distinguishes between two different import types. I've\nmade a mistake before, so I think it is useful to add a regression\ntest to prevent someone else from making this same mistake (trivial\nas it may be). Additionally, testing in copy.sql also allows for\nvalidating the bytes_total column, which cannot be tested in copy2.sql\ndue to the lack of COPY FROM FILE support over there. I'm +0.5 on\nkeeping it as-is in copy.sql, so unless someone has some strong\nfeelings about this, I'd like to keep it in copy.sql.\n\nWith regards,\n\nMatthias van de Meent.", "msg_date": "Fri, 12 Feb 2021 18:47:06 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "--- a/doc/src/sgml/ref/analyze.sgml \n+++ b/doc/src/sgml/ref/analyze.sgml \n@@ -273,6 +273,12 @@ ANALYZE [ VERBOSE ] [ <replaceable class=\"parameter\">table_and_columns</replacea \n 
Any existing statistics \n will be retained. \n </para> \n+ \n+ <para> \n+ Each backend running the <command>ANALYZE</command> command will report their \n+ progress to the <structname>pg_stat_progress_analyze</structname> view. \n+ See <xref linkend=\"analyze-progress-reporting\"/> for details. \n+ </para> \n\nI think this should say:\n\n\"..will report its progress to..\"\n\nOr:\n\n\"The progress of each backend running >ANALYZE< is reported in the\n>pg_stat_progress_analyze< view.\"\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 12 Feb 2021 18:37:32 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "Hi,\n\nI agree with these changes in general - I have a couple minor comment:\n\n1) 0001\n\n- the SGML docs are missing a couple tags\n\n- The blocks in copyfrom.cc/copyto.c should be reworked - I don't think\nwe do this in our codebase. Move the variable declarations to the\nbeginning, get rid of the out block. Or something like that.\n\n- I fir the \"io_target\" name misleading, because in some cases it's\nactually the *source*.\n\n\n2) 0002\n\n- I believe \"each backend ... reports its\" (not theirs), right?\n\n- This seems more like a generic docs improvement, not quite specific to\nthe COPY progress patch. It's a bit buried, maybe it should be posted\nseparately. OTOH it's pretty small.\n\n\n3) 0003\n\n- Some whitespace noise, triggering \"git am\" warnings.\n\n- Might be good to briefly explain what the regression test does with\nthe triggers, etc.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 15 Feb 2021 17:07:11 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "Hi,\n\nThank you all for the suggestions. 
PFA version 8 of the patchset, in\nwhich I have applied most of your comments. Unless explicitly named\nbelow, I have applied the suggestions.\n\n\nOn Mon, 15 Feb 2021 at 17:07, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> - The blocks in copyfrom.cc/copyto.c should be reworked - I don't think\n> we do this in our codebase.\n\nI saw this being used in (re)index progress reporting, that's where I\ntook inspiration from. It has been fixed in the attached version.\n\n> - I fir the \"io_target\" name misleading, because in some cases it's\n> actually the *source*.\n\nYes, I was also not quite happy with this, but couldn't find a better\none at the point of writing the initial patchset. Would\n\"io_operations\", \"io_port\", \"operates_through\" or \"through\" maybe be\nbetter?\n\n\nWith regards,\n\nMatthias van de Meent", "msg_date": "Thu, 18 Feb 2021 16:46:58 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "\n\nOn 2/18/21 4:46 PM, Matthias van de Meent wrote:\n> Hi,\n> \n> Thank you all for the suggestions. PFA version 8 of the patchset, in\n> which I have applied most of your comments. Unless explicitly named\n> below, I have applied the suggestions.\n> \n\nThanks.\n\n> \n> On Mon, 15 Feb 2021 at 17:07, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> - The blocks in copyfrom.cc/copyto.c should be reworked - I don't think\n>> we do this in our codebase.\n> \n> I saw this being used in (re)index progress reporting, that's where I\n> took inspiration from. It has been fixed in the attached version.\n> \n\nHmmm, good point. I haven't looked at the other places reporting\nprogress and I only ever saw this pattern in old code. I kinda dislike\nthese blocks, but admittedly that's a rather subjective view. So if other\nsimilar places do this when reporting progress, this probably should\ntoo. 
What's your opinion on this?\n\n>> - I fir the \"io_target\" name misleading, because in some cases it's\n>> actually the *source*.\n> \n> Yes, I was also not quite happy with this, but couldn't find a better\n> one at the point of writing the initial patchset. Would\n> \"io_operations\", \"io_port\", \"operates_through\" or \"through\" maybe be\n> better?\n> \n\nNo idea. Let's see if someone has a better proposal ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 18 Feb 2021 22:03:56 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Fri, Feb 19, 2021 at 2:34 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> > On Mon, 15 Feb 2021 at 17:07, Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> - The blocks in copyfrom.cc/copyto.c should be reworked - I don't think\n> >> we do this in our codebase.\n> >\n> > I saw this being used in (re)index progress reporting, that's where I\n> > took inspiration from. It has been fixed in the attached version.\n> >\n>\n> Hmmm, good point. I haven't looked at the other places reporting\n> progress and I only ever saw this pattern in old code. I kinda dislike\n> these blocks, but admittedly that's rather subjective view. So if other\n> similar places do this when reporting progress, this probably should\n> too. What's your opinion on this?\n\nActually in the code base the style of that variable declaration and\nusage of pgstat_progress_update_multi_param is a mix. For instance, in\nlazy_scan_heap, ReindexRelationConcurrently, the variables are\ndeclared at the start of the function. 
And in _bt_spools_heapscan,\nindex_build, validate_index, perform_base_backup, the variables are\ndeclared within a separate block.\n\nIMO, we can have the arrays declared at the start of the functions\ni.e. the way it's done in v8-0001, because we can extend them for\nreporting some other parameter (maybe in the future).\n\n> >> - I fir the \"io_target\" name misleading, because in some cases it's\n> >> actually the *source*.\n> >\n> > Yes, I was also not quite happy with this, but couldn't find a better\n> > one at the point of writing the initial patchset. Would\n> > \"io_operations\", \"io_port\", \"operates_through\" or \"through\" maybe be\n> > better?\n> >\n>\n> No idea. Let's see if someone has a better proposal ...\n\nFor COPY TO the name \"destination_type\" column and for COPY FROM the name\n\"source_type\" makes sense. To have a combined column name for\nboth, how about naming that column as \"io_type\"?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 20 Feb 2021 11:39:22 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Sat, Feb 20, 2021 at 11:39:22AM +0530, Bharath Rupireddy wrote:\n> Actually in the code base the style of that variable declaration and\n> usage of pgstat_progress_update_multi_param is a mix. For instance, in\n> lazy_scan_heap, ReindexRelationConcurrently, the variables are\n> declared at the start of the function. And in _bt_spools_heapscan,\n> index_build, validate_index, perform_base_backup, the variables are\n> declared within a separate block.\n\nI think that we should encourage the use of\npgstat_progress_update_multi_param() where we can, as it makes\nconsistent the updates to all the parameters according to\nst_changecount. 
That's also usually cleaner to store all the\nparameters that are changed if these are updated multiple times like\nthe REINDEX CONCURRENTLY ones. The context of the code also matters,\nof course.\n\nScanning through the patch set, 0002 is a good idea taken\nindependently.\n--\nMichael", "msg_date": "Sat, 20 Feb 2021 16:19:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Sat, Feb 20, 2021 at 12:49 PM Michael Paquier <michael@paquier.xyz>\nwrote:\n>\n> On Sat, Feb 20, 2021 at 11:39:22AM +0530, Bharath Rupireddy wrote:\n> > Actually in the code base the style of that variable declaration and\n> > usage of pgstat_progress_update_multi_param is a mix. For instance, in\n> > lazy_scan_heap, ReindexRelationConcurrently, the variables are\n> > declared at the start of the function. And in _bt_spools_heapscan,\n> > index_build, validate_index, perform_base_backup, the variables are\n> > declared within a separate block.\n>\n> I think that we should encourage the use of\n> pgstat_progress_update_multi_param() where we can, as it makes\n> consistent the updates to all the parameters according to\n> st_changecount. That's also usually cleaner to store all the\n> parameters that are changed if these are updated multiple times like\n> the REINDEX CONCURRENTLY ones. The context of the code also matters,\n> of course.\n\nYeah. We could use pgstat_progress_update_multi_param instead of\npgstat_progress_update_param to update multiple params.\n\nOn a quick scan through the code, I found that we can do the following. If\nokay, I can start a new thread so that we don't divert the main thread\nhere. 
Thoughts?\n\n@@ -3686,12 +3686,18 @@ reindex_index(Oid indexId, bool\nskip_constraint_checks, char persistence,\n if (progress)\n {\n+ const int progress_cols[] = {\n+ PROGRESS_CREATEIDX_COMMAND,\n+ PROGRESS_CREATEIDX_INDEX_OID\n+ };\n+ const int64 progress_vals[] = {\n+ PROGRESS_CREATEIDX_COMMAND_REINDEX,\n+ indexId\n+ };\n+\n pgstat_progress_start_command(PROGRESS_COMMAND_CREATE_INDEX,\n heapId);\n- pgstat_progress_update_param(PROGRESS_CREATEIDX_COMMAND,\n- PROGRESS_CREATEIDX_COMMAND_REINDEX);\n- pgstat_progress_update_param(PROGRESS_CREATEIDX_INDEX_OID,\n- indexId);\n+ pgstat_progress_update_multi_param(2, progress_cols,\nprogress_vals);\n }\n@@ -1457,10 +1457,21 @@ DefineIndex(Oid relationId,\n set_indexsafe_procflags();\n /*\n- * The index is now visible, so we can report the OID.\n+ * The index is now visible, so we can report the OID. And also, report\n+ * Phase 2 of concurrent index build.\n */\n- pgstat_progress_update_param(PROGRESS_CREATEIDX_INDEX_OID,\n- indexRelationId);\n+ {\n+ const int progress_cols[] = {\n+ PROGRESS_CREATEIDX_INDEX_OID,\n+ PROGRESS_CREATEIDX_PHASE\n+ };\n+ const int64 progress_vals[] = {\n+ indexRelationId,\n+ PROGRESS_CREATEIDX_PHASE_WAIT_1\n+ };\n+\n+ pgstat_progress_update_multi_param(2, progress_cols,\nprogress_vals);\n+ }\n@@ -284,12 +284,9 @@ cluster_rel(Oid tableOid, Oid indexOid, ClusterParams\n*params)\n CHECK_FOR_INTERRUPTS();\n pgstat_progress_start_command(PROGRESS_COMMAND_CLUSTER, tableOid);\n- if (OidIsValid(indexOid))\n- pgstat_progress_update_param(PROGRESS_CLUSTER_COMMAND,\n- PROGRESS_CLUSTER_COMMAND_CLUSTER);\n- else\n- pgstat_progress_update_param(PROGRESS_CLUSTER_COMMAND,\n- PROGRESS_CLUSTER_COMMAND_VACUUM_FULL);\n+ pgstat_progress_update_param(PROGRESS_CLUSTER_COMMAND,\n+ OidIsValid(indexOid) ?\nPROGRESS_CLUSTER_COMMAND_CLUSTER :\n+ PROGRESS_CLUSTER_COMMAND_VACUUM_FULL);\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 20 Feb 2021 14:29:44 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "so 20. 2. 
2021 at 7:09, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Feb 19, 2021 at 2:34 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> > > On Mon, 15 Feb 2021 at 17:07, Tomas Vondra\n> > > <tomas.vondra@enterprisedb.com> wrote:\n> > >>\n> > >> - The blocks in copyfrom.cc/copyto.c should be reworked - I don't think\n> > >> we do this in our codebase.\n> > >\n> > > I saw this being used in (re)index progress reporting, that's where I\n> > > took inspiration from. It has been fixed in the attached version.\n> > >\n> >\n> > Hmmm, good point. I haven't looked at the other places reporting\n> > progress and I only ever saw this pattern in old code. I kinda dislike\n> > these blocks, but admittedly that's rather subjective view. So if other\n> > similar places do this when reporting progress, this probably should\n> > too. What's your opinion on this?\n>\n> Actually in the code base the style of that variable declaration and\n> usage of pgstat_progress_update_multi_param is a mix. For instance, in\n> lazy_scan_heap, ReindexRelationConcurrently, the variables are\n> declared at the start of the function. And in _bt_spools_heapscan,\n> index_build, validate_index, perform_base_backup, the variables are\n> declared within a separate block.\n>\n> IMO, we can have the arrays declared at the start of the functions\n> i.e. the way it's done in v8-0001, because we can extend them for\n> reporting some other parameter(maybe in future).\n>\n> > >> - I fir the \"io_target\" name misleading, because in some cases it's\n> > >> actually the *source*.\n> > >\n> > > Yes, I was also not quite happy with this, but couldn't find a better\n> > > one at the point of writing the initial patchset. Would\n> > > \"io_operations\", \"io_port\", \"operates_through\" or \"through\" maybe be\n> > > better?\n> > >\n> >\n> > No idea. 
Let's see if someone has a better proposal ...\n>\n> For COPY TO the name \"source_type\" column and for COPY FROM the name\n> \"destination_type\" makes sense. To have a combined column name for\n> both, how about naming that column as \"io_type\"?\n\n+1 on \"io_type\", that is my best candidate as well\n\n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 20 Feb 2021 14:17:30 +0100", "msg_from": "Josef Šimánek <josef.simanek@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Sat, Feb 20, 2021 at 02:29:44PM +0530, Bharath Rupireddy wrote:\n> Yeah. We could use pgstat_progress_update_multi_param instead of\n> pgstat_progress_update_param to update multiple params.\n> \n> On a quick scan through the code, I found that we can do the following. If\n> okay, I can start a new thread so that we don't divert the main thread\n> here. Thoughts?\n\nHaving a separate thread to discuss this part would be right. This\nway any patches sent would attract the correct audience.\n--\nMichael", "msg_date": "Sun, 21 Feb 2021 08:23:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Sat, 20 Feb 2021 at 07:09, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> For COPY TO the name \"source_type\" column and for COPY FROM the name\n> \"destination_type\" makes sense. To have a combined column name for\n> both, how about naming that column as \"io_type\"?\n\nThank you, that's way better! 
PFA what I believe is a finalized\npatchset v9, utilizing io_type terminology instead of io_target.\n\n\nWith regards,\n\nMatthias van de Meent", "msg_date": "Sun, 21 Feb 2021 20:10:09 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Mon, Feb 22, 2021 at 12:40 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Sat, 20 Feb 2021 at 07:09, Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > For COPY TO the name \"source_type\" column and for COPY FROM the name\n> > \"destination_type\" makes sense. To have a combined column name for\n> > both, how about naming that column as \"io_type\"?\n>\n> Thank you, that's way better! PFA what I believe is a finalized\n> patchset v9, utilizing io_type terminology instead of io_target.\n\nThanks for the patches. I reviewed them.\n\n0001 - I think there's a bug. See COPY TO stdout doesn't print\nio_type as \"STDIO\".\n\npostgres=# select * from pg_stat_progress_copy;\n pid | datid | datname | relid | command | io_type |\nbytes_processed | bytes_total | tuples_processed | tuples_excluded\n--------+-------+----------+-------+---------+---------+-----------------+-------------+------------------+-----------------\n 977510 | 13003 | postgres | 16384 | COPY TO | |\n23961591 | 0 | 2662399 | 0\n(1 row)\n\nwe should do the following (like we do for copyfrom.c):\n\n@@ -702,7 +710,10 @@ BeginCopyTo(ParseState *pstate,\n if (pipe)\n {\n progress_vals[1] = PROGRESS_COPY_IO_TYPE_STDIO;\n Assert(!is_program); /* the grammar does not allow this */\n if (whereToSendOutput != DestRemote)\n cstate->copy_file = stdout;\n }\n\nBecause if \"pipe\" is true, that means the transfer goes through STDIO.\nSee the comment below:\n\n * If <pipe> is false, transfer is between the table and the file named\n * <filename>. 
Otherwise, transfer is between the table and our regular\n * input/output stream. The latter could be either stdin/stdout or a\n * socket, depending on whether we're running under Postmaster control.\n *\n\n0002 patch looks good to me.\n\n0003 - patch:\nI'm doubtful that the \"bytes_total\": 79, i.e. the test file size, will be the\nsame across different platforms and file system types; if it is not, then\nthe below tests will not be stable. Do we also want to exclude the\nbytes_total from the output, just to be on the safer side? Thoughts?\n\ncopy progress_reporting from stdin;\n+INFO: progress: {\"command\": \"COPY FROM\", \"datname\": \"regression\",\n\"io_type\": \"STDIO\", \"bytes_total\": 0, \"bytes_processed\": 79,\n\"tuples_excluded\": 0, \"tuples_processed\": 3}\n+-- reporting of FILE imports, and correct reporting of tuples-excluded\n+copy progress_reporting from '@abs_srcdir@/data/emp.data'\n+ where (salary < 2000);\n+INFO: progress: {\"command\": \"COPY FROM\", \"datname\": \"regression\",\n\"io_type\": \"FILE\", \"bytes_total\": 79, \"bytes_processed\": 79,\n\"tuples_excluded\": 1, \"tuples_processed\": 2}\n+-- cleanup progress_reporting\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 22 Feb 2021 10:19:32 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Mon, 22 Feb 2021 at 05:49, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Feb 22, 2021 at 12:40 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> >\n> > On Sat, 20 Feb 2021 at 07:09, Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > For COPY TO the name \"source_type\" column and for COPY FROM the name\n> > > \"destination_type\" makes sense. 
To have a combined column name for\n> > > both, how about naming that column as \"io_type\"?\n> >\n> > Thank you, that's way better! PFA what I believe is a finalized\n> > patchset v9, utilizing io_type terminology instead of io_target.\n>\n> Thanks for the patches. I reviewed them.\n>\n> 0001 - I think there's a bug. See COPY TO stdout doesn't print\n> io_type as \"STDIO\".\n\nFixed in the attached version.\n\n> 0003 - patch:\n> I'm doubtful if the \"bytes_total\": 79 i.e. test file size will be the\n> same across different platforms and file systems types, if true, then\n> the below tests will not be stable. Do we also want to exclude the\n> bytes_total from the output, just to be on the safer side? Thoughts?\n\nI'm fairly certain that input files of the regression tests are\nconsidered 'binary files' by the test framework and that contents\ndon't change between different architectures or OSes. I also think\nthat any POSIX-compliant file system would not report anything but the\nsize of the file contents, i.e. the size of the blob that is the file,\nand that is correctly reported here. Other than that, if bytes_total\nwouldn't be stable, then bytes_processed wouldn't make sense either.\n\nFor STDIN / STDOUT you might also have a point (different input\nmethods might have different length encodings for the specified\ninput), but insofar as I understand the test framework and the\nexpected git configurations, the tests run using UTF-8 / ascii only,\nwith a single style of newlines[+]. 
Sadly, I cannot provide examples\nnor outputs for other test framework settings due to my lack of\nexperience with running the tests with non-standard settings.\n\nNote, I'm happy to be proven wrong here, in which case I don't\ndisagree, but according to my limited knowledge, these outputs should\nbe stable.\n\n\nWith regards,\n\nMatthias van de Meent\n\n[+] Except when explicitly configured to run using non-standard\nconfigurations, in which case there are different expected output\nvalues to be configured for that configuration.", "msg_date": "Tue, 23 Feb 2021 10:27:24 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Tue, Feb 23, 2021 at 2:57 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> On Mon, 22 Feb 2021 at 05:49, Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Mon, Feb 22, 2021 at 12:40 AM Matthias van de Meent\n> > <boekewurm+postgres@gmail.com> wrote:\n> > >\n> > > On Sat, 20 Feb 2021 at 07:09, Bharath Rupireddy\n> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > >\n> > > > For COPY TO the name \"source_type\" column and for COPY FROM the name\n> > > > \"destination_type\" makes sense. To have a combined column name for\n> > > > both, how about naming that column as \"io_type\"?\n> > >\n> > > Thank you, that's way better! PFA what I believe is a finalized\n> > > patchset v9, utilizing io_type terminology instead of io_target.\n> >\n> > Thanks for the patches. I reviewed them.\n> >\n> > 0001 - I think there's a bug. See COPY TO stdout doesn't print\n> > io_type as \"STDIO\".\n>\n> Fixed in attached\n\nThanks.\n\n> > 0003 - patch:\n> > I'm doubtful if the \"bytes_total\": 79 i.e. test file size will be the\n> > same across different platforms and file systems types, if true, then\n> > the below tests will not be stable. 
Do we also want to exclude the\n> > bytes_total from the output, just to be on the safer side? Thoughts?\n>\n> I'm fairly certain that input files of the regression tests are\n> considered 'binary files' to the test framework and that contents\n> don't change between different architectures or OSes. I also think\n> that any POSIX-compliant file system would report anything but the\n> size of the file contents, i.e. the size of the blob that is the file,\n> and that is correctly reported here. Other than that, if bytes_total\n> wouldn't be stable, then bytes_processed wouldn't make sense either.\n>\n> For STDIN / STDOUT you might also have a point (different input\n> methods might have different length encodings for the specified\n> input), but insofar that I understand the test framework and the\n> expected git configurations, the tests run using UTF-8 / ascii only,\n> with a single style of newlines[+]. Sadly, I cannot provide examples\n> nor outputs for other test framework settings due to my lack of\n> experience with running the tests with non-standard settings.\n>\n> Note, I'm happy to be proven wrong here, in which case I don't\n> disagree, but according to my limited knowledge, these outputs should\n> be stable.\n\nI'm no expert in different OS architectures, but I see that the\npatches are passing on cf bot where they get tested on Windows,\nFreeBSD, Linux and macOS platforms.\n\nI have no further comments on the v10 patch set. I tested the patches,\nthey work as expected. 
I will mark the cf entry as \"Ready for\nCommitter\".\n\nBelow are some snapshots from testing:\n\npostgres=# select * from pg_stat_progress_copy;\n pid | datid | datname | relid | command | io_type |\nbytes_processed | bytes_total | tuples_processed | tuples_excluded\n---------+-------+----------+-------+-----------+---------+-----------------+-------------+------------------+-----------------\n 1089927 | 13003 | postgres | 16384 | COPY FROM | FILE |\n104660992 | 888888898 | 12861112 | 0\n 1089969 | 13003 | postgres | 16384 | COPY FROM | FILE |\n76611584 | 888888898 | 9712999 | 0\n\npostgres=# select * from pg_stat_progress_copy;\n pid | datid | datname | relid | command | io_type |\nbytes_processed | bytes_total | tuples_processed | tuples_excluded\n---------+-------+----------+-------+-----------+---------+-----------------+-------------+------------------+-----------------\n 1089927 | 13003 | postgres | 16384 | COPY FROM | FILE |\n203161600 | 888888898 | 0 | 23804080\n 1089969 | 13003 | postgres | 16384 | COPY FROM | FILE |\n150601728 | 888888898 | 0 | 17961241\n\n\n postgres=# select * from pg_stat_progress_copy;\n pid | datid | datname | relid | command | io_type |\nbytes_processed | bytes_total | tuples_processed | tuples_excluded\n---------+-------+----------+-------+---------+---------+-----------------+-------------+------------------+-----------------\n 1089927 | 13003 | postgres | 16384 | COPY TO | FILE |\n66806479 | 0 | 7422942 | 0\n 1089969 | 13003 | postgres | 16384 | COPY TO | FILE |\n29803951 | 0 | 3311550 | 0\n\n postgres=# select * from pg_stat_progress_copy;\n pid | datid | datname | relid | command | io_type |\nbytes_processed | bytes_total | tuples_processed | tuples_excluded\n---------+-------+----------+-------+---------+---------+-----------------+-------------+------------------+-----------------\n 1089927 | 13003 | postgres | 16384 | COPY TO | STDIO |\n5998293 | 0 | 666477 | 0\n 1089969 | 13003 | postgres | 16384 | COPY TO | STDIO 
|\n2780586 | 0 | 308954 | 0\n\n postgres=# select * from pg_stat_progress_copy;\n pid | datid | datname | relid | command | io_type |\nbytes_processed | bytes_total | tuples_processed | tuples_excluded\n---------+-------+----------+-------+---------+---------+-----------------+-------------+------------------+-----------------\n 1089927 | 13003 | postgres | 0 | COPY TO | FILE |\n124447239 | 0 | 13827471 | 0\n 1089969 | 13003 | postgres | 0 | COPY TO | FILE |\n90992466 | 0 | 10110274 | 0\n\npostgres=# select * from pg_stat_progress_copy;\n pid | datid | datname | relid | command | io_type |\nbytes_processed | bytes_total | tuples_processed | tuples_excluded\n---------+-------+----------+-------+-----------+---------+-----------------+-------------+------------------+-----------------\n 1090927 | 13003 | postgres | 16384 | COPY FROM | STDIO |\n492465897 | 0 | 55952999 | 0\n 1091000 | 13003 | postgres | 16384 | COPY FROM | STDIO |\n30494360 | 0 | 3950683 | 0\n\n postgres=# select * from pg_stat_progress_copy;\n pid | datid | datname | relid | command | io_type |\nbytes_processed | bytes_total | tuples_processed | tuples_excluded\n---------+-------+----------+-------+-----------+---------+-----------------+-------------+------------------+-----------------\n 1091217 | 13003 | postgres | 16384 | COPY FROM | STDIO |\n230516127 | 0 | 0 | 26847469\n 1091224 | 13003 | postgres | 16384 | COPY FROM | STDIO |\n212020065 | 0 | 0 | 24792351\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Feb 2021 08:42:14 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Tue, Feb 23, 2021 at 10:27:24AM +0100, Matthias van de Meent wrote:\n> Note, I'm happy to be proven wrong here, in which case I don't\n> disagree, but according to my limited knowledge, these outputs should\n> be 
stable.\n\nI am planning to look more at 0001 and 0003, but for now I have been\nlooking at 0002 which is interesting on its own.\n\n+ <structname>pg_stat_progress_vacuum</structname> view. Backends running\n+ <command>VACUUM</command> with the <literal>FULL</literal> option report\n+ progress in the <structname>pg_stat_progress_cluster</structname> instead.\nYou have missed one \"view\" after pg_stat_progress_cluster here.\nExcept that, this stuff looks fine. So I'd like to apply it if there\nare no objections.\n--\nMichael", "msg_date": "Wed, 24 Feb 2021 16:46:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Sun, Feb 21, 2021 at 08:10:09PM +0100, Matthias van de Meent wrote:\n> Subject: [PATCH v9 1/3] Add progress-reported components for COPY progress\n> reporting\n\n> \t\t\t/* Increment amount of processed tuples and update the progress */\n> \t/* Increment amount of processed tuples and update the progress */\n\nIdeally, this would say \"number of processed tuples\"\n\n> Subject: [PATCH v9 2/3] Add backlinks to progress reporting documentation\n\nI think these should say that they report their progress *in* the view (not\n\"to\"):\n\n> + Each backend running <command>ANALYZE</command> will report its progress to\n> + the <structname>pg_stat_progress_analyze</structname> view. See\n\n> + Each backend running <command>CLUSTER</command> will report its progress to\n> + the <structname>pg_stat_progress_cluster</structname> view. See\n\n> + Each backend running <command>COPY</command> will report its progress to\n> + the <structname>pg_stat_progress_copy</structname> view. 
See\n\n> + Each backend running <command>CREATE INDEX</command> will report its\n> + progress to the <structname>pg_stat_progress_create_index</structname>\n\n> + Each backend running <command>REINDEX</command> will report its progress\n> + to the <structname>pg_stat_progress_create_index</structname> view. See\n\nLike this one:\n\n> + Each backend running <command>VACUUM</command> without the\n> + <literal>FULL</literal> option will report its progress in the\n\nI'm sorry I didn't include that in last week's message. You could also write:\n\n|\"The progress of each backend running >ANALYZE< is reported in the >pg_stat_progress_analyze< view.\"\n\nLooking at the existing docs:\n\nhttps://www.postgresql.org/docs/devel/progress-reporting.html#COPY-PROGRESS-REPORTING\n| OID of the table on which the COPY command is executed\n\nMaybe it should say \".. is executing\". Or \".. being executed\":\n| OID of the table on which the COPY command is being executed\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 24 Feb 2021 01:53:03 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Wed, Feb 24, 2021 at 01:53:03AM -0600, Justin Pryzby wrote:\n> On Sun, Feb 21, 2021 at 08:10:09PM +0100, Matthias van de Meent wrote:\n> I think these should say that they report their progress *in* the view (not\n> \"to\"):\n> \n> > + Each backend running <command>ANALYZE</command> will report its progress to\n> > + the <structname>pg_stat_progress_analyze</structname> view. 
See\n\nWhat is proposed in the patch is:\n\"Each backend running <command>blah</> will report its progress to the\npg_stat_progress_blah view.\"\n\nWhat you propose is:\n\"Each backend running <command>blah</> will report its progress in the\npg_stat_progress_blah view.\"\n\nWhat pg_basebackup tells is:\n\"Whenever <application>pg_basebackup</application> is taking a base\nbackup, the server's pg_stat_progress_basebackup view will report the\nprogress of the backup.\"\n\nHere is an extra idea:\n\"Whenever a backend runs <command>blah</>, the server's\npg_stat_progress_blah will report the progress of this command.\"\n--\nMichael", "msg_date": "Thu, 25 Feb 2021 17:14:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Thu, Feb 25, 2021 at 1:44 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Feb 24, 2021 at 01:53:03AM -0600, Justin Pryzby wrote:\n> > On Sun, Feb 21, 2021 at 08:10:09PM +0100, Matthias van de Meent wrote:\n> > I think these should say that they report their progress *in* the view (not\n> > \"to\"):\n> >\n> > > + Each backend running <command>ANALYZE</command> will report its progress to\n> > > + the <structname>pg_stat_progress_analyze</structname> view. See\n>\n> What is proposed in the patch is:\n> \"Each backend running <command>blah</> will report its progress to the\n> pg_stat_progress_blah view.\"\n>\n> What you propose is:\n> \"Each backend running <command>blah</> will report its progress in the\n> pg_stat_progress_blah view.\"\n\nIMO, the phrasing proposed by Justin upthread looks good. It's like this:\n\n> + Each backend running <command>ANALYZE</command> will report its progress in\n> + the <structname>pg_stat_progress_analyze</structname> view. 
See\n\n> + Each backend running <command>CREATE INDEX</command> will report its\n> + progress in the <structname>pg_stat_progress_create_index</structname> view\n\n...\n...\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Mar 2021 12:16:17 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Wed, Feb 24, 2021 at 1:23 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Sun, Feb 21, 2021 at 08:10:09PM +0100, Matthias van de Meent wrote:\n> > Subject: [PATCH v9 1/3] Add progress-reported components for COPY progress\n> > reporting\n>\n> > /* Increment amount of processed tuples and update the progress */\n> > /* Increment amount of processed tuples and update the progress */\n>\n> Ideally, this would say \"number of processed tuples\"\n\nCorrect. It's introduced by the original COPY progress reporting\npatch. Having said that, we could just correct them in the 0001 patch.\n\n> Looking at the existing docs:\n>\n> https://www.postgresql.org/docs/devel/progress-reporting.html#COPY-PROGRESS-REPORTING\n> | OID of the table on which the COPY command is executed\n>\n> Maybe it should say \".. is executing\". Or \".. being executed\":\n> | OID of the table on which the COPY command is being executed\n\n+1 for changing it to \"OID of the table on which the COPY command is\nbeing executed.\"\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Mar 2021 12:21:40 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Thu, Mar 04, 2021 at 12:16:17PM +0530, Bharath Rupireddy wrote:\n> IMO, the phrasing proposed by Justin upthread looks good. 
It's like this:\n> \n> > + Each backend running <command>ANALYZE</command> will report its progress in\n> > + the <structname>pg_stat_progress_analyze</structname> view. See\n\nNo objections to just go with that. As a new patch set is needed, I\nam switching the CF entry to \"Waiting on Author\".\n--\nMichael", "msg_date": "Thu, 4 Mar 2021 19:38:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Thu, 4 Mar 2021 at 11:38, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Mar 04, 2021 at 12:16:17PM +0530, Bharath Rupireddy wrote:\n> > IMO, the phrasing proposed by Justin upthread looks good. It's like this:\n> >\n> > > + Each backend running <command>ANALYZE</command> will report its progress in\n> > > + the <structname>pg_stat_progress_analyze</structname> view. See\n>\n> No objections to just go with that. As a new patch set is needed, I\n> am switching the CF entry to \"Waiting on Author\".\n\nThanks for all your comments, and sorry for the delayed response.\nPlease find attached a new version of the patch set, that is rebased\nand contains the requested changes:\n\n1/3:\nDocs:\n- on which the COPY command is executed\n+ on which the COPY command is being executed\nReworded existing commment:\n- /* Increment amount of processed tuples and update the progress */\n+ /* Increment the number of processed tuples, and report the progress */\n\n2/3:\nDocs:\n- ... report its progress to ...\n+ ... 
report its progress in ...\n- report its progress to the >pg_stat_progress_cluster< ...\n+ report its progress in the >pg_stat_progress_cluster< view ...\n\n3/3:\nNo changes\n\nI believe that that was the extent of the not-yet-resolved comments\nand suggestions.\n\n\nWith regards,\n\nMatthias van de Meent.", "msg_date": "Thu, 4 Mar 2021 12:32:38 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Thu, Mar 4, 2021 at 5:02 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Thu, 4 Mar 2021 at 11:38, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Thu, Mar 04, 2021 at 12:16:17PM +0530, Bharath Rupireddy wrote:\n> > > IMO, the phrasing proposed by Justin upthread looks good. It's like this:\n> > >\n> > > > + Each backend running <command>ANALYZE</command> will report its progress in\n> > > > + the <structname>pg_stat_progress_analyze</structname> view. See\n> >\n> > No objections to just go with that. As a new patch set is needed, I\n> > am switching the CF entry to \"Waiting on Author\".\n>\n> Thanks for all your comments, and sorry for the delayed response.\n> Please find attached a new version of the patch set, that is rebased\n> and contains the requested changes:\n>\n> 1/3:\n> Docs:\n> - on which the COPY command is executed\n> + on which the COPY command is being executed\n> Reworded existing commment:\n> - /* Increment amount of processed tuples and update the progress */\n> + /* Increment the number of processed tuples, and report the progress */\n\nLGTM.\n\n> 2/3:\n> Docs:\n> - ... report its progress to ...\n> + ... 
report its progress in ...\n> - report its progress to the >pg_stat_progress_cluster< ...\n> + report its progress in the >pg_stat_progress_cluster< view ...\n\n+ <para>\n+ Each backend running <command>VACUUM</command> without the\n+ <literal>FULL</literal> option will report its progress in the\n+ <structname>pg_stat_progress_vacuum</structname> view. Backends running\n+ <command>VACUUM</command> with the <literal>FULL</literal> option report\n+ progress in the <structname>pg_stat_progress_cluster</structname> view\n+ instead. See <xref linkend=\"vacuum-progress-reporting\"/> and\n+ <xref linkend=\"cluster-progress-reporting\"/> for details.\n+ </para>\n\nI think a typo, missing \"will\" between option and report - it's \"with\nthe <literal>FULL</literal> option will report\"\n\nExcept the above typo, 0002 LGTM.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Mar 2021 18:06:45 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "čt 4. 3. 2021 v 12:32 odesílatel Matthias van de Meent\n<boekewurm+postgres@gmail.com> napsal:\n>\n> On Thu, 4 Mar 2021 at 11:38, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Thu, Mar 04, 2021 at 12:16:17PM +0530, Bharath Rupireddy wrote:\n> > > IMO, the phrasing proposed by Justin upthread looks good. It's like this:\n> > >\n> > > > + Each backend running <command>ANALYZE</command> will report its progress in\n> > > > + the <structname>pg_stat_progress_analyze</structname> view. See\n> >\n> > No objections to just go with that. 
As a new patch set is needed, I\n> > am switching the CF entry to \"Waiting on Author\".\n>\n> Thanks for all your comments, and sorry for the delayed response.\n> Please find attached a new version of the patch set, that is rebased\n> and contains the requested changes:\n>\n> 1/3:\n> Docs:\n> - on which the COPY command is executed\n> + on which the COPY command is being executed\n> Reworded existing commment:\n> - /* Increment amount of processed tuples and update the progress */\n> + /* Increment the number of processed tuples, and report the progress */\n>\n> 2/3:\n> Docs:\n> - ... report its progress to ...\n> + ... report its progress in ...\n> - report its progress to the >pg_stat_progress_cluster< ...\n> + report its progress in the >pg_stat_progress_cluster< view ...\n>\n> 3/3:\n> No changes\n>\n> I believe that that was the extent of the not-yet-resolved comments\n> and suggestions.\n\nLGTM, special thanks for taking over the work on initial COPY progress\nregression tests.\n\n>\n> With regards,\n>\n> Matthias van de Meent.\n\n\n", "msg_date": "Thu, 4 Mar 2021 14:44:26 +0100", "msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Thu, 4 Mar 2021 at 13:36, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> + <para>\n> + Each backend running <command>VACUUM</command> without the\n> + <literal>FULL</literal> option will report its progress in the\n> + <structname>pg_stat_progress_vacuum</structname> view. Backends running\n> + <command>VACUUM</command> with the <literal>FULL</literal> option report\n> + progress in the <structname>pg_stat_progress_cluster</structname> view\n> + instead. 
See <xref linkend=\"vacuum-progress-reporting\"/> and\n> + <xref linkend=\"cluster-progress-reporting\"/> for details.\n> + </para>\n>\n> I think a typo, missing \"will\" between option and report - it's \"with\n> the <literal>FULL</literal> option will report\"\n\n\"Backends running [...] report progress to [...] instead\" is,\na.f.a.i.k., correct English. Adding 'will' would indeed still be\ncorrect, but it would in my opinion also be decremental to the\nreadability of the section due to the repeated use of the same\ntemplate sentence structure. I think that keeping it as-is is just\nfine.\n\n\n\nWith regards,\n\nMatthias van de Meent.\n\n\n", "msg_date": "Thu, 4 Mar 2021 17:19:18 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Thu, Mar 04, 2021 at 05:19:18PM +0100, Matthias van de Meent wrote:\n> On Thu, 4 Mar 2021 at 13:36, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > + <para>\n> > + Each backend running <command>VACUUM</command> without the\n> > + <literal>FULL</literal> option will report its progress in the\n> > + <structname>pg_stat_progress_vacuum</structname> view. Backends running\n> > + <command>VACUUM</command> with the <literal>FULL</literal> option report\n> > + progress in the <structname>pg_stat_progress_cluster</structname> view\n> > + instead. See <xref linkend=\"vacuum-progress-reporting\"/> and\n> > + <xref linkend=\"cluster-progress-reporting\"/> for details.\n> > + </para>\n> >\n> > I think a typo, missing \"will\" between option and report - it's \"with\n> > the <literal>FULL</literal> option will report\"\n> \n> \"Backends running [...] report progress to [...] instead\" is,\n> a.f.a.i.k., correct English. 
Adding 'will' would indeed still be\n> correct, but it would in my opinion also be decremental to the\n> readability of the section due to the repeated use of the same\n> template sentence structure. I think that keeping it as-is is just\n> fine.\n\nI'd prefer to see the same thing repeated, since then it's easy to compare, for\nreaders, and also future doc authors. That's normal in technical documentation\nto have redundancy. It's easy to read.\n\nI'd suggest to move \"instead\" into the middle of the sentence,\nand combine VACUUM+FULL, and add \"their\":\n\n> > + ... Backends running <command>VACUUM FULL</literal> will instead report\n> > + their progress in the <structname>pg_stat_progress_cluster</structname> view.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 4 Mar 2021 10:29:12 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Thu, 4 Mar 2021 at 17:29, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Mar 04, 2021 at 05:19:18PM +0100, Matthias van de Meent wrote:\n> >\n> > \"Backends running [...] report progress to [...] instead\" is,\n> > a.f.a.i.k., correct English. Adding 'will' would indeed still be\n> > correct, but it would in my opinion also be decremental to the\n> > readability of the section due to the repeated use of the same\n> > template sentence structure. I think that keeping it as-is is just\n> > fine.\n>\n> I'd prefer to see the same thing repeated, since then it's easy to compare, for\n> readers, and also future doc authors. That's normal in technical documentation\n> to have redundancy. It's easy to read.\n>\n> I'd suggest to move \"instead\" into the middle of the sentence,\n> and combine VACUUM+FULL, and add \"their\":\n>\n> > > + ... 
Backends running <command>VACUUM FULL</literal> will instead report\n> > > + their progress in the <structname>pg_stat_progress_cluster</structname> view.\n\nSure, I'm convinced. PFA the patchset with this change applied.\n\nWith regards,\n\nMatthias van de Meent", "msg_date": "Thu, 4 Mar 2021 17:45:50 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Thu, Mar 04, 2021 at 05:45:50PM +0100, Matthias van de Meent wrote:\n> Sure, I'm convinced. PFA the patchset with this change applied.\n\n0002 looks fine to me, and in line with the discussion, so applied.\n--\nMichael", "msg_date": "Fri, 5 Mar 2021 15:00:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Fri, Mar 5, 2021 at 11:30 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Mar 04, 2021 at 05:45:50PM +0100, Matthias van de Meent wrote:\n> > Sure, I'm convinced. PFA the patchset with this change applied.\n>\n> 0002 looks fine to me, and in line with the discussion, so applied.\n\nThanks.\n\nAttaching remaining patches 0001 and 0003 from the v11 patch\nset(posted upthread) here to make cfbot happier.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sun, 7 Mar 2021 16:50:31 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Sun, Mar 07, 2021 at 04:50:31PM +0530, Bharath Rupireddy wrote:\n> Attaching remaining patches 0001 and 0003 from the v11 patch\n> set(posted upthread) here to make cfbot happier.\n\nLooking at patch 0002, the location of each progress report looks good\nto me. 
I have some issues with some of the names chosen though, so I\nwould like to suggest a few changes to simplify things:\n- PROGRESS_COPY_IO_TYPE_* => PROGRESS_COPY_TYPE_*\n- PROGRESS_COPY_IO_TYPE => PROGRESS_COPY_TYPE\n- PROGRESS_COPY_TYPE_STDIO => PROGRESS_COPY_TYPE_PIPE\n- In pg_stat_progress_copy, io_type => type\n\nIt seems a bit confusing to not count insertions on foreign tables\nwhere nothing happened. I am fine to live with that, but can I ask if\nthis has been thought about?\n--\nMichael", "msg_date": "Mon, 8 Mar 2021 17:23:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Mon, 8 Mar 2021 at 09:24, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, Mar 07, 2021 at 04:50:31PM +0530, Bharath Rupireddy wrote:\n> > Attaching remaining patches 0001 and 0003 from the v11 patch\n> > set(posted upthread) here to make cfbot happier.\n>\n> Looking at patch 0002, the location of each progress report looks good\n> to me. I have some issues with some of the names chosen though, so I\n> would like to suggest a few changes to simplify things:\n> - PROGRESS_COPY_IO_TYPE_* => PROGRESS_COPY_TYPE_*\n> - PROGRESS_COPY_IO_TYPE => PROGRESS_COPY_TYPE\n> - PROGRESS_COPY_TYPE_STDIO => PROGRESS_COPY_TYPE_PIPE\n> - In pg_stat_progress_copy, io_type => type\n\nSeems reasonable. PFA updated patches. I've renamed the previous 0003\nto 0002 to keep git-format-patch easy.\n\n> It seems a bit confusing to not count insertions on foreign tables\n> where nothing happened. 
I am fine to live with that, but can I ask if\n> this has been thought about?\n\nThis is keeping current behaviour of the implementation as committed\nwith 8a4f618e, with the rationale of that patch being that this number\nshould mirror the number returned by the copy command.\n\nI am not opposed to adding another column for `tuples_inserted` and\nchanging the logic accordingly (see prototype 0003), but that was not\nin the intended scope of this patchset. Unless you think that this\nshould be included in this current patchset, I'll spin that patch out\ninto a different thread, but I'm not sure that would make it into\npg14.\n\nWith regards,\n\nMatthias van de Meent.", "msg_date": "Mon, 8 Mar 2021 17:33:40 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Mon, Mar 08, 2021 at 05:33:40PM +0100, Matthias van de Meent wrote:\n> Seems reasonable. PFA updated patches. I've renamed the previous 0003\n> to 0002 to keep git-format-patch easy.\n\nThanks for updating the patch. 0001 has been applied, after tweaking\na bit comments, indentation and the docs.\n\n> This is keeping current behaviour of the implementation as committed\n> with 8a4f618e, with the rationale of that patch being that this number\n> should mirror the number returned by the copy command.\n> \n> I am not opposed to adding another column for `tuples_inserted` and\n> changing the logic accordingly (see prototype 0003), but that was not\n> in the intended scope of this patchset. Unless you think that this\n> should be included in this current patchset, I'll spin that patch out\n> into a different thread, but I'm not sure that would make it into\n> pg14.\n\nOkay, point taken. If there is demand for it in the future, we could\nextend the existing set of columns. 
After thinking more about it the\nusecase if not completely clear to me from a monitoring point of\nview.\n\nI have not looked at 0002 in details yet, but I am wondering first if\nthe size estimations in the expected output are actually portable.\nSecond, I doubt a bit that the extra cycles spent on that are actually\nworth the coverage, even if the trick with an AFTER INSERT trigger is\ninteresting.\n--\nMichael", "msg_date": "Tue, 9 Mar 2021 14:34:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Tue, Mar 9, 2021 at 11:04 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > This is keeping current behaviour of the implementation as committed\n> > with 8a4f618e, with the rationale of that patch being that this number\n> > should mirror the number returned by the copy command.\n> >\n> > I am not opposed to adding another column for `tuples_inserted` and\n> > changing the logic accordingly (see prototype 0003), but that was not\n> > in the intended scope of this patchset. Unless you think that this\n> > should be included in this current patchset, I'll spin that patch out\n> > into a different thread, but I'm not sure that would make it into\n> > pg14.\n>\n> Okay, point taken. If there is demand for it in the future, we could\n> extend the existing set of columns. 
After thinking more about it the\n> usecase if not completely clear to me from a monitoring point of\n> view.\n\nI think, for now, having better comments on what's included and what's\nnot in the existing tuples_processed column would help.\n\n> I have not looked at 0002 in details yet, but I am wondering first if\n> the size estimations in the expected output are actually portable.\n> Second, I doubt a bit that the extra cycles spent on that are actually\n> worth the coverage, even if the trick with an AFTER INSERT trigger is\n> interesting.\n\nI was having the same doubt [1], please have a look at it and the few\nthreads that follow. IMO, we can have the tests without the\n\"bytes_total\" column if we think that it's not stable across all OS\nplatforms. I think those tests don't take much time to run and AFAIK\nit's the first progress report command to have tests because others\nsuch as analyse, vacuum, cluster, base backup don't have.\n\n[1] - https://www.postgresql.org/message-id/flat/CALj2ACUyNREth8f3M7wXrHVeycfnqBn5pVygYOoBVs%3Difo8V4A%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 9 Mar 2021 13:32:58 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "út 9. 3. 2021 v 6:34 odesílatel Michael Paquier <michael@paquier.xyz> napsal:\n>\n> On Mon, Mar 08, 2021 at 05:33:40PM +0100, Matthias van de Meent wrote:\n> > Seems reasonable. PFA updated patches. I've renamed the previous 0003\n> > to 0002 to keep git-format-patch easy.\n>\n> Thanks for updating the patch. 
0001 has been applied, after tweaking\n> a bit comments, indentation and the docs.\n>\n> > This is keeping current behaviour of the implementation as committed\n> > with 8a4f618e, with the rationale of that patch being that this number\n> > should mirror the number returned by the copy command.\n> >\n> > I am not opposed to adding another column for `tuples_inserted` and\n> > changing the logic accordingly (see prototype 0003), but that was not\n> > in the intended scope of this patchset. Unless you think that this\n> > should be included in this current patchset, I'll spin that patch out\n> > into a different thread, but I'm not sure that would make it into\n> > pg14.\n>\n> Okay, point taken. If there is demand for it in the future, we could\n> extend the existing set of columns. After thinking more about it the\n> usecase if not completely clear to me from a monitoring point of\n> view.\n>\n> I have not looked at 0002 in details yet, but I am wondering first if\n> the size estimations in the expected output are actually portable.\n> Second, I doubt a bit that the extra cycles spent on that are actually\n> worth the coverage, even if the trick with an AFTER INSERT trigger is\n> interesting.\n\nThose extra cycles are in there to cover at least parts of the COPY\nprogress from being accidentally broken. I have seen various patches\nmodifying COPY command being currently in progress. 
It would be nice\nto ensure at least basic functionality works well in an automated way.\nOn my machine there is no huge overhead added by adding those tests\n(they finish almost instantly).\n\n> --\n> Michael\n\n\n", "msg_date": "Tue, 9 Mar 2021 11:39:48 +0100", "msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Tue, 9 Mar 2021 at 06:34, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Mar 08, 2021 at 05:33:40PM +0100, Matthias van de Meent wrote:\n> > Seems reasonable. PFA updated patches. I've renamed the previous 0003\n> > to 0002 to keep git-format-patch easy.\n>\n> Thanks for updating the patch. 0001 has been applied, after tweaking\n> a bit comments, indentation and the docs.\n\nThanks!\n\n> > This is keeping current behaviour of the implementation as committed\n> > with 8a4f618e, with the rationale of that patch being that this number\n> > should mirror the number returned by the copy command.\n> >\n> > I am not opposed to adding another column for `tuples_inserted` and\n> > changing the logic accordingly (see prototype 0003), but that was not\n> > in the intended scope of this patchset. Unless you think that this\n> > should be included in this current patchset, I'll spin that patch out\n> > into a different thread, but I'm not sure that would make it into\n> > pg14.\n>\n> Okay, point taken. If there is demand for it in the future, we could\n> extend the existing set of columns. 
After thinking more about it the\n> usecase if not completely clear to me from a monitoring point of\n> view.\n>\n> I have not looked at 0002 in details yet, but I am wondering first if\n> the size estimations in the expected output are actually portable.\n> Second, I doubt a bit that the extra cycles spent on that are actually\n> worth the coverage, even if the trick with an AFTER INSERT trigger is\n> interesting.\n\nThere are examples in which pg_stat_progress_* -views report\ninaccurate data. I think it is fairly reasonable to at least validate\nsome part of the progress reporting, as it is one of the few methods\nfor administrators to look at the state of currently running\nadministrative tasks, and as such, this user-visible api should be\nvalidated.\n\n\nWith regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Wed, 10 Mar 2021 09:35:10 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Wed, Mar 10, 2021 at 09:35:10AM +0100, Matthias van de Meent wrote:\n> There are examples in which pg_stat_progress_* -views report\n> inaccurate data. I think it is fairly reasonable to at least validate\n> some part of the progress reporting, as it is one of the few methods\n> for administrators to look at the state of currently running\n> administrative tasks, and as such, this user-visible api should be\n> validated.\n\nLooking closer at 0002, the size numbers are actually incorrect on\nWindows for the second query. The CRLFs at the end of each line of\nemp.data add three bytes to the report of COPY FROM, so this finishes\nwith 82 bytes for bytes_total and bytes_processed instead of 79.\nLet's make this useful but simpler here, so I propose to check that\nthe counters are higher than zero instead of an exact number. 
Let's\nalso add the relation name relid::regclass while on it.\n\nThe tests introduced are rather limited, but you are right that\nsomething is better than nothing here, and I have slightly updated\nwhat the tests sent previously as per the attached. What do you\nthink?\n--\nMichael", "msg_date": "Mon, 15 Mar 2021 13:53:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Mon, 15 Mar 2021 at 05:53, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Mar 10, 2021 at 09:35:10AM +0100, Matthias van de Meent wrote:\n> > There are examples in which pg_stat_progress_* -views report\n> > inaccurate data. I think it is fairly reasonable to at least validate\n> > some part of the progress reporting, as it is one of the few methods\n> > for administrators to look at the state of currently running\n> > administrative tasks, and as such, this user-visible api should be\n> > validated.\n>\n> Looking closer at 0002, the size numbers are actually incorrect on\n> Windows for the second query. The CRLFs at the end of each line of\n> emp.data add three bytes to the report of COPY FROM, so this finishes\n> with 82 bytes for bytes_total and bytes_processed instead of 79.\n\nHmm, does CFBot not run checkout on windows with crlf line endings? I\nhad expected it to do as such.\n\n> Let's make this useful but simpler here, so I propose to check that\n> the counters are higher than zero instead of an exact number. Let's\n> also add the relation name relid::regclass while on it.\n\n+1, I hadn't thought of casting relid to its regclass to get a stable\nidentifier.\n\n> The tests introduced are rather limited, but you are right that\n> something is better than nothing here, and I have slightly updated\n> what the tests sent previously as per the attached. 
What do you\n> think?\n\nThat seems great, thanks for picking this up.\n\n\nWith regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Mon, 15 Mar 2021 12:43:40 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improvements and additions to COPY progress reporting" }, { "msg_contents": "On Mon, Mar 15, 2021 at 12:43:40PM +0100, Matthias van de Meent wrote:\n> Hmm, does CFBot not run checkout on windows with crlf line endings? I\n> had expected it to do as such.\n\nThis is environment-sensitive, so I am not surprised that Appveyor\nchanges the way newlines are handled there. I could see the\ndifference by running the tests manually on Windows command prompt for\nexample.\n\n> That seems great, thanks for picking this up.\n\nOkay. Applied, then.\n--\nMichael", "msg_date": "Tue, 16 Mar 2021 10:00:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Improvements and additions to COPY progress reporting" } ]
[ { "msg_contents": "I noticed that the file src/backend/utils/adt/inet_cidr_ntop.c has no \ntest coverage at all. The only way to reach this appears to be by \ncalling abbrev(cidr). It was easy to add a test case for this into the \nexisting, otherwise pretty complete, cidr tests.", "msg_date": "Mon, 8 Feb 2021 19:40:50 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "small test case for abbrev(cidr)" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> I noticed that the file src/backend/utils/adt/inet_cidr_ntop.c has no \n> test coverage at all. The only way to reach this appears to be by \n> calling abbrev(cidr). It was easy to add a test case for this into the \n> existing, otherwise pretty complete, cidr tests.\n\nSeems reasonable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Feb 2021 14:29:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: small test case for abbrev(cidr)" } ]
[ { "msg_contents": "Hi everyone,\nI wanted to see why we do not allow the following statements to be allowed\nwithin a transaction block:\n1. Create database\n2. Drop Database\nIs there a detailed reasoning behind disallowing the above statements as\npart of the design. Will appreciate it if someone can share on why postgres\ndoes not allow these statements inside a transaction block.\n\nThank you\n\nHi everyone,I wanted to see why we do not allow the following statements to be allowed within a transaction block:1. Create database2. Drop DatabaseIs there a detailed reasoning behind disallowing the above statements as part of the design. Will appreciate it if someone can share on why postgres does not allow these statements inside a transaction block.Thank you", "msg_date": "Mon, 8 Feb 2021 11:01:36 -0800", "msg_from": "Hari Sankar <harisankars2003@gmail.com>", "msg_from_op": true, "msg_subject": "Allowing create database within transaction block" }, { "msg_contents": "Hari Sankar <harisankars2003@gmail.com> writes:\n> I wanted to see why we do not allow the following statements to be allowed\n> within a transaction block:\n> 1. Create database\n> 2. Drop Database\n> Is there a detailed reasoning behind disallowing the above statements as\n> part of the design. Will appreciate it if someone can share on why postgres\n> does not allow these statements inside a transaction block.\n\nMostly it's that create and drop database consist of a filesystem tree\ncopy and a filesystem recursive delete respectively. So there isn't any\ndetailed WAL log entry for them, and no way to roll back at transaction\nabort.\n\nIt might be possible to convert these to roll-back-able operations by\nremembering that a recursive delete has to be done during transaction\nabort (for the create case) or commit (for the delete case), much as\nwe do for table create/drop cases. 
That's a bit scary however,\nremembering that it's totally not acceptable to throw any sort of\nerror at that stage of a transaction commit. Any failure during the\nrecursive delete would likely end up in leaking a lot of disk space\nfrom files we failed to delete.\n\nShort answer is that it could probably be done if someone wanted to\nput enough effort into it, but the cost/benefit ratio hasn't seemed\nattractive.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Feb 2021 13:58:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allowing create database within transaction block" } ]
[ { "msg_contents": "Hi Craig, Robert,\n\n\nThe 011_crash_recovery.pl test test starts a transaction, creates a\ntable, fetches the transaction's xid. Then shuts down the server in\nimmediate mode. It then asserts that after crash recovery the previously\nassigned xid is shown as aborted, and that new xids are assigned after\nthe xid.\n\nBut as far as I can tell there's no guarantee that that is the case.\n\nIt only happens to work because the test - for undocumented reasons -\ncreates the install with $node->init(allows_streaming => 1), which in\nturn restricts shared_buffers to 1MB. Which forces the test to flush WAL\nto disk during the CREATE TABLE.\n\nI see failures in the test both when I increase the 1MB or when I change\nthe buffer replacement logic sufficiently.\n\nE.g.\nnot ok 2 - new xid after restart is greater\n\n# Failed test 'new xid after restart is greater'\n# at t/011_crash_recovery.pl line 61.\n# '511'\n# >\n# '511'\nnot ok 3 - xid is aborted after crash\n\n\n\nCraig, it kind of looks to me like you assumed it'd be guaranteed that\nthe xid at this point would show in-progress?\n\nI don't think the use of txid_status() described in the docs added in\nthe commit is actually ever safe?\n\ncommit 857ee8e391ff6654ef9dcc5dd8b658d7709d0a3c\nAuthor: Robert Haas <rhaas@postgresql.org>\nDate: 2017-03-24 12:00:53 -0400\n\n Add a txid_status function.\n\n If your connection to the database server is lost while a COMMIT is\n in progress, it may be difficult to figure out whether the COMMIT was\n successful or not. This function will tell you, provided that you\n don't wait too long to ask. It may be useful in other situations,\n too.\n\n Craig Ringer, reviewed by Simon Riggs and by me\n\n Discussion: http://postgr.es/m/CAMsr+YHQiWNEi0daCTboS40T+V5s_+dst3PYv_8v2wNVH+Xx4g@mail.gmail.com\n\n\n+ <para>\n+ <function>txid_status(bigint)</> reports the commit status of a recent\n+ transaction. 
Applications may use it to determine whether a transaction\n+ committed or aborted when the application and database server become\n+ disconnected while a <literal>COMMIT</literal> is in progress.\n+ The status of a transaction will be reported as either\n+ <literal>in progress</>,\n+ <literal>committed</>, or <literal>aborted</>, provided that the\n+ transaction is recent enough that the system retains the commit status\n+ of that transaction. If is old enough that no references to that\n+ transaction survive in the system and the commit status information has\n+ been discarded, this function will return NULL. Note that prepared\n+ transactions are reported as <literal>in progress</>; applications must\n+ check <link\n+ linkend=\"view-pg-prepared-xacts\"><literal>pg_prepared_xacts</></> if they\n+ need to determine whether the txid is a prepared transaction.\n+ </para>\n\nUntil the commit *has completed*, nothing guarantees that anything\nbearing the transaction's xid has made it to disk. And we surely don't\nwant to force a WAL flush when assigning a transaction id, right?\n\nGreetings,\n\nAndres Freund\n\n\n\n", "msg_date": "Mon, 8 Feb 2021 13:52:06 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Is txid_status() actually safe? / What is 011_crash_recovery.pl\n testing?" }, { "msg_contents": "On Mon, Feb 8, 2021 at 4:52 PM Andres Freund <andres@anarazel.de> wrote:\n> The 011_crash_recovery.pl test test starts a transaction, creates a\n> table, fetches the transaction's xid. Then shuts down the server in\n> immediate mode. 
It then asserts that after crash recovery the previously\n> assigned xid is shown as aborted, and that new xids are assigned after\n> the xid.\n>\n> But as far as I can tell there's no guarantee that that is the case.\n\nI think you are right.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 9 Feb 2021 15:28:26 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is txid_status() actually safe? / What is 011_crash_recovery.pl\n testing?" }, { "msg_contents": "On Wed, 10 Feb 2021 at 04:28, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Feb 8, 2021 at 4:52 PM Andres Freund <andres@anarazel.de> wrote:\n> > The 011_crash_recovery.pl test test starts a transaction, creates a\n> > table, fetches the transaction's xid. Then shuts down the server in\n> > immediate mode. It then asserts that after crash recovery the previously\n> > assigned xid is shown as aborted, and that new xids are assigned after\n> > the xid.\n> >\n> > But as far as I can tell there's no guarantee that that is the case.\n>\n> I think you are right.\n>\n>\nAndres, I missed this mail initially. I'll look into it shortly.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Wed, 5 May 2021 22:37:52 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Is txid_status() actually safe? / What is 011_crash_recovery.pl\n testing?" },
{ "msg_contents": "On Tue, 9 Feb 2021 at 05:52, Andres Freund <andres@anarazel.de> wrote:\n\n>\n> Craig, it kind of looks to me like you assumed it'd be guaranteed that\n> the xid at this point would show in-progress?\n>\n\nAt the time I wrote that code, I don't think I understood that xid\nassignment wasn't necessarily durable until either (a) the next checkpoint;\nor (b) commit of some txn with a greater xid.\n\nIIRC I expected that after crash and recovery the tx would always be\ntreated as aborted, because the xid had been assigned but no corresponding\ncommit was found before end-of-recovery. No explicit abort records are\nwritten to WAL for such txns since we crashed, but the server's oldest\nin-progress txn threshold is used to determine that they must be aborted\nrather than in-progress even though their clog entries aren't set to\naborted.\n\nWhich was fine as far as it went, but I failed to account for the xid\nassignment not necessarily being durable when the client calls\ntxid_status().\n\n\n> I don't think the use of txid_status() described in the docs added in\n> the commit is actually ever safe?\n>\n\nI agree. 
The client can query for its xid with txid_current() but as you\nnote there's no guarantee that the assigned xid is durable.\n\nThe client would have to ensure that an xid was assigned, then ensure that\nthe WAL was durably flushed past the point of the xid assignment before\nrelying on the xid.\n\nIf we do a txn that performs a small write, calls txid_current(), and sends\na commit that the server crashes before completing, we can't know for sure\nthat the xid we recorded client-side before the server crash is the same\ntxn we check the status of after crash recovery. Some other txn could've\nre-used the xid after crash so long as no other txn with a greater xid\ndurably committed before the crash.\n\nThat scenario isn't hugely likely, but it's definitely possible on systems\nthat don't do a lot of concurrent txns or do mostly long, heavyweight txns.\n\nThe txid_status() function was originally intended to be paired with a way\nto report topxid assignment to the client automatically, NOTIFY or\nGUC_REPORT-style. But that would not make this usage safe either, unless we\ndelayed the report until WAL was flushed past the LSN of the xid assignment\n*or* some other txn with a greater xid committed.\n\nThis could be made safe with a variant of txid_current() that forced the\nxid assignment to be logged immediately if it was not already, and did not\nreturn until WAL flushed past the point of the assignment. If the client\ndid most of the txn's work before requesting a guaranteed-durable xid, it\nwould in practice not land up having to wait for a flush. But we'd have to\nkeep track of when we assigned the xid in every single topxact in order to\nbe able to promise we'd flushed it without having to immediately force a\nflush. 
That's pointless overhead all the rest of the time, just in case\nsomeone wants to get an xid for later use with txid_status().\n\nThe simplest option with no overhead on anything that doesn't care about\ntxid_status() is to expose a function to force flush of WAL up to the\ncurrent insert LSN. Then update the docs to say you have to call it after\ntxid_current(), and before sending your commit. But at that point you might\nas well use 2PC, since you're paying the same double flush and double\nround-trip costs. The main point of txid_status() was to avoid the cost of\nthat double-flush.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Wed, 5 May 2021 23:15:53 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Is txid_status() actually safe? / What is 011_crash_recovery.pl\n testing?" },
{ "msg_contents": "On Wed, 5 May 2021 at 23:15, Craig Ringer <craig@2ndquadrant.com> wrote:\n\n> Which was fine as far as it went, but I failed to account for the xid\n> assignment not necessarily being durable when the client calls\n> txid_status().\n\nAhem, I meant \"when the client calls txid_current()\"\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise", "msg_date": "Wed, 5 May 2021 23:17:13 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Is txid_status() actually safe? / What is 011_crash_recovery.pl\n testing?" } ]
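The race discussed in the thread above — a client-recorded xid being handed out again after a crash because the assignment was never durably logged — can be made concrete with a toy model. This is illustration only, not PostgreSQL code: the names and the single-counter design are invented, and real xid recovery involves checkpoints and WAL records rather than one durable integer.

```python
# Toy model of xid-assignment durability (invented names, not PostgreSQL code).
class ToyServer:
    def __init__(self):
        self.next_xid = 100          # next xid to hand out (in memory only)
        self.durable_next_xid = 100  # what survives a crash

    def assign_xid(self):
        xid = self.next_xid
        self.next_xid += 1
        return xid

    def flush_wal(self):
        # Durably record all assignments made so far (standing in for a
        # commit of some txn with a greater xid, or a checkpoint).
        self.durable_next_xid = self.next_xid

    def crash_and_recover(self):
        # Unflushed assignments are lost; allocation restarts from the
        # last durable point.
        self.next_xid = self.durable_next_xid

# Unsafe pattern: record the xid, then crash before anything is flushed.
server = ToyServer()
my_xid = server.assign_xid()
server.crash_and_recover()
other_xid = server.assign_xid()      # a different transaction...
assert other_xid == my_xid           # ...gets the very same xid back

# Safe pattern: make the assignment durable before relying on it.
server = ToyServer()
my_xid = server.assign_xid()
server.flush_wal()
server.crash_and_recover()
assert server.assign_xid() > my_xid  # my_xid can no longer be reused
```

Here `flush_wal()` stands in for "WAL is flushed past the point of the xid assignment", which is exactly the guarantee that calling `txid_current()` alone does not provide.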
[ { "msg_contents": "Hi, can someone point me to the code that cleans up temp files should a\nquery crashed unexpectedly? Thanks!", "msg_date": "Mon, 8 Feb 2021 14:09:51 -0800", "msg_from": "CK Tan <cktan@vitessedata.com>", "msg_from_op": true, "msg_subject": "Clean up code" },
{ "msg_contents": "On 09/02/2021 00:09, CK Tan wrote:\n> Hi, can someone point me to the code that cleans up temp files should a \n> query crashed unexpectedly? Thanks!\n\nAutovacuum does that. Search for \"orphan\" in do_autovacuum() function.\n\n- Heikki\n\n\n", "msg_date": "Tue, 9 Feb 2021 13:45:34 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Clean up code" },
{ "msg_contents": "On Mon, Feb 8, 2021, at 7:09 PM, CK Tan wrote:\n> Hi, can someone point me to the code that cleans up temp files should a query crashed unexpectedly? Thanks!\nSee this patch [1].\n\n\n[1] https://www.postgresql.org/message-id/CAH503wDKdYzyq7U-QJqGn%3DGm6XmoK%2B6_6xTJ-Yn5WSvoHLY1Ww%40mail.gmail.com\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Tue, 09 Feb 2021 22:10:32 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: Clean up code" } ]
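The thread above asks where leftovers from a crashed query get cleaned up. The general janitor pattern — temporary files carry their owner's identity in the name, and a sweeper deletes files whose owner no longer exists — can be sketched as follows. The naming scheme (`tmp_<pid>_<n>`) and the function names are invented for illustration; they are not PostgreSQL's actual layout or code.

```python
# Generic "orphan temp file" janitor sketch -- illustrative only.
import os
import re
import tempfile

TEMP_RE = re.compile(r"^tmp_(\d+)_\d+$")  # tmp_<ownerpid>_<n>, invented scheme

def clean_orphans(directory, live_pids):
    """Delete temp files whose owning process is no longer alive."""
    removed = []
    for name in os.listdir(directory):
        m = TEMP_RE.match(name)
        if m and int(m.group(1)) not in live_pids:
            os.unlink(os.path.join(directory, name))
            removed.append(name)
    return removed

d = tempfile.mkdtemp()
open(os.path.join(d, "tmp_42_0"), "w").close()   # owner 42: crashed
open(os.path.join(d, "tmp_77_0"), "w").close()   # owner 77: still alive
removed = clean_orphans(d, live_pids={77})
assert removed == ["tmp_42_0"]
assert os.listdir(d) == ["tmp_77_0"]
```

The same shape appears in the answers above: a periodic worker (or startup-time sweep) compares what is on disk against what is currently live, and removes anything orphaned.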
[ { "msg_contents": "Hi,\n\nAttached is a draft of the release announcement for the upcoming\n2021-02-11 cumulative update release. Please review for technical\naccuracy (I did screen for typos, but would not be surprised if any\nslipped in).\n\nPlease provide feedback on this thread no later than 2020-02-10 AoE[1].\n\nThanks!\n\nJonathan\n\n[1] https://en.wikipedia.org/wiki/Anywhere_on_Earth", "msg_date": "Mon, 8 Feb 2021 17:40:41 -0500", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "2021-02-11 release announcement draft" }, { "msg_contents": "On Mon, Feb 08, 2021 at 05:40:41PM -0500, Jonathan S. Katz wrote:\n> This update also fixes over 80 bugs that were reported in the last several\n> months. Some of these issues only affect version 13, but may also apply to other\n> supported versions.\n\nDid you want s/may/many/?\n\n> * Fix an issue with GiST indexes where concurrent insertions could lead to a\n> corrupt index with entries placed in the wrong pages. You should `REINDEX` any\n> affected GiST indexes.\n\nFor what it's worth, there's little way for a user to confirm whether an index\nis affected. 
(If you've never had more than one connection changing the table\nat a time, the table's indexes would be unaffected.)\n\n> * Fix `CREATE INDEX CONCURRENTLY` to ensure rows from concurrent prepared\n> transactions are included in the index.\n\nConsider adding a sentence like \"Installations that have enabled prepared\ntransactions should `REINDEX` any concurrently-built indexes.\" The release\nnotes say:\n\n+ In installations that have enabled prepared transactions\n+ (<varname>max_prepared_transactions</varname> &gt; 0),\n+ it's recommended to reindex any concurrently-built indexes in\n+ case this problem occurred when they were built.\n\n> * Fix a failure when a PL/pgSQL procedure used `CALL` on another procedure that\n> has `OUT` parameters did not call execute a `COMMIT` or `ROLLBACK`.\n\nThe release notes say the failure happened when the callee _did_ execute a\nCOMMIT or ROLLBACK:\n\n+ <para>\n+ A <command>CALL</command> in a PL/pgSQL procedure, to another\n+ procedure that has OUT parameters, would fail if the called\n+ procedure did a <command>COMMIT</command>\n+ or <command>ROLLBACK</command>.\n+ </para>\n\n> For more details, please see the\n> [release notes](https://www.postgresql.org/docs/current/release.html).\n\nI recommend pointing this to https://www.postgresql.org/docs/release/, since\nthe above link now contains only v13 notes.\n\n\n", "msg_date": "Mon, 8 Feb 2021 15:11:52 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: 2021-02-11 release announcement draft" }, { "msg_contents": "> On 02/08/2021 11:40 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> \n> \n> Hi,\n> \n> Attached is a draft of the release announcement for the upcoming\n> 2021-02-11 cumulative update release. 
Please review for technical\n\n'closes fixes' maybe better is:\n'includes fixes' or 'closes bugs'\n\n\n'also fixes over 80 bugs'\nMaybe drop the 'also'; those same 80 bugs have just been mentioned.\n\n\nErik Rijkers\n\n\n", "msg_date": "Tue, 9 Feb 2021 05:30:05 +0100 (CET)", "msg_from": "er@xs4all.nl", "msg_from_op": false, "msg_subject": "Re: 2021-02-11 release announcement draft" }, { "msg_contents": "On 2/8/21 11:30 PM, er@xs4all.nl wrote:\n>> On 02/08/2021 11:40 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n>>\n>> \n>> Hi,\n>>\n>> Attached is a draft of the release announcement for the upcoming\n>> 2021-02-11 cumulative update release. Please review for technical\n> \n> 'closes fixes' maybe better is:\n> 'includes fixes' or 'closes bugs'\n> \n> \n> 'also fixes over 80 bugs'\n> Maybe drop the 'also'; those same 80 bugs have just been mentioned.\n\nThanks for the suggestions. I have included them in the updated draft\nwhich I am posting to Noah's reply.\n\nJonathan", "msg_date": "Wed, 10 Feb 2021 10:14:24 -0500", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: 2021-02-11 release announcement draft" }, { "msg_contents": "On 2/8/21 6:11 PM, Noah Misch wrote:\n> On Mon, Feb 08, 2021 at 05:40:41PM -0500, Jonathan S. Katz wrote:\n>> This update also fixes over 80 bugs that were reported in the last several\n>> months. Some of these issues only affect version 13, but may also apply to other\n>> supported versions.\n> \n> Did you want s/may/many/?\n\nNope. The bugs may also apply to other versions. Maybe to be clearer\nI'll /may/could/?\n\nI made that change.\n\n> \n>> * Fix an issue with GiST indexes where concurrent insertions could lead to a\n>> corrupt index with entries placed in the wrong pages. You should `REINDEX` any\n>> affected GiST indexes.\n> \n> For what it's worth, there's little way for a user to confirm whether an index\n> is affected. 
(If you've never had more than one connection changing the table\n> at a time, the table's indexes would be unaffected.)\n\nSo Peter Geoghegan and I had a roughly 30 minute back and forth just on\nthis point. Based on our discussion, we felt it best to go with this\nstatement.\n\nI think this one is tough to give guidance around, but I don't think a\nblanket \"anyone who has had concurrent writes to a GiST index should\nreindex\" is the right answer.\n\n>> * Fix `CREATE INDEX CONCURRENTLY` to ensure rows from concurrent prepared\n>> transactions are included in the index.\n> \n> Consider adding a sentence like \"Installations that have enabled prepared\n> transactions should `REINDEX` any concurrently-built indexes.\" The release\n> notes say:\n> \n> + In installations that have enabled prepared transactions\n> + (<varname>max_prepared_transactions</varname> &gt; 0),\n> + it's recommended to reindex any concurrently-built indexes in\n> + case this problem occurred when they were built.\n\nOops, I must have missed that in my first build of the release notes (or\nI just plain missed it). I agree with that.\n\n>> * Fix a failure when a PL/pgSQL procedure used `CALL` on another procedure that\n>> has `OUT` parameters did not call execute a `COMMIT` or `ROLLBACK`.\n> \n> The release notes say the failure happened when the callee _did_ execute a\n> COMMIT or ROLLBACK:\n> \n> + <para>\n> + A <command>CALL</command> in a PL/pgSQL procedure, to another\n> + procedure that has OUT parameters, would fail if the called\n> + procedure did a <command>COMMIT</command>\n> + or <command>ROLLBACK</command>.\n> + </para>\n\nOops, good catch. 
Fixed.\n\n>> For more details, please see the\n>> [release notes](https://www.postgresql.org/docs/current/release.html).\n> \n> I recommend pointing this to https://www.postgresql.org/docs/release/, since\n> the above link now contains only v13 notes.\n\nWFM.\n\nPlease see updated draft.\n\nThanks,\n\nJonathan", "msg_date": "Wed, 10 Feb 2021 10:15:12 -0500", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: 2021-02-11 release announcement draft" },
{ "msg_contents": "On 02/10/21 10:15, Jonathan S. Katz wrote:\n> On 2/8/21 6:11 PM, Noah Misch wrote:\n>> On Mon, Feb 08, 2021 at 05:40:41PM -0500, Jonathan S. Katz wrote:\n>>> Some of these issues only affect version 13, but may also apply to other\n>>> supported versions.\n>>\n>> Did you want s/may/many/?\n>\n> Nope. The bugs may also apply to other versions. Maybe to be clearer\n> I'll /may/could/?\n\nIf that's what was intended, shouldn't it be \"but others may also apply\nto other supported versions\"? ^^^^^^\n\nSurely the ones that \"only affect version 13\" do not affect other versions,\nnot even on a 'may' or 'could' basis.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Wed, 10 Feb 2021 10:19:26 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: 2021-02-11 release announcement draft" },
{ "msg_contents": "Hi,\n\nOn Wed, Feb 10, 2021 at 10:15:12AM -0500, Jonathan S. Katz wrote:\n> Please see updated draft.\n\nWhat about the CVEs, it's my understanding that two security issues have\nbeen fixed; shouldn't they be mentioned as well? Or are those scheduled\nto be merged into the announcement at the last minute?\n\n\nMichael\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. 
Michael Meskes, Jörg Folz, Sascha Heuer\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n", "msg_date": "Wed, 10 Feb 2021 16:36:13 +0100", "msg_from": "Michael Banck <michael.banck@credativ.de>", "msg_from_op": false, "msg_subject": "Re: 2021-02-11 release announcement draft" },
{ "msg_contents": "On 2/10/21 10:19 AM, Chapman Flack wrote:\n> On 02/10/21 10:15, Jonathan S. Katz wrote:\n>> On 2/8/21 6:11 PM, Noah Misch wrote:\n>>> On Mon, Feb 08, 2021 at 05:40:41PM -0500, Jonathan S. Katz wrote:\n>>>> Some of these issues only affect version 13, but may also apply to other\n>>>> supported versions.\n>>>\n>>> Did you want s/may/many/?\n>>\n>> Nope. The bugs may also apply to other versions. Maybe to be clearer\n>> I'll /may/could/?\n> \n> If that's what was intended, shouldn't it be \"but others may also apply\n> to other supported versions\"? ^^^^^^\n> \n> Surely the ones that \"only affect version 13\" do not affect other versions,\n> not even on a 'may' or 'could' basis.\n\nThe main goals of the release announcement are to:\n\n* Let people know there are update releases for supported versions that\nfix bugs.\n* Provide a glimpse at what is fixed so the reader can determine their\nlevel of urgency around updating.\n* Direct people on where to download and find out more specifics about\nthe releases.\n\nI appreciate the suggestions on this sentence, but I don't think the\ndesired goals hinges on this one word.\n\nThanks,\n\nJonathan", "msg_date": "Wed, 10 Feb 2021 10:39:48 -0500", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: 2021-02-11 release announcement draft" },
{ "msg_contents": "On Wed, Feb 10, 2021 at 4:36 PM Michael Banck <michael.banck@credativ.de> wrote:\n>\n> Hi,\n>\n> On Wed, Feb 10, 2021 at 10:15:12AM -0500, Jonathan S. 
Katz wrote:\n> > Please see updated draft.\n>\n> What about the CVEs, it's my understanding that two security issues have\n> been fixed; shouldn't they be mentioned as well? Or are those scheduled\n> to be merged into the announcement at the last minute?\n\nAny potential security announcements are handled independently \"out of band\".\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Wed, 10 Feb 2021 16:43:42 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: 2021-02-11 release announcement draft" } ]
[ { "msg_contents": "Hi all,\n\nIn recent history, we have had two bugs causing a crash of the backend\nbecause of the default behavior of rd_tableam to be NULL for a\nrelcache entry for relkinds that have no storage:\n1) Sequential attempt for a view:\nhttps://www.postgresql.org/message-id/16856-0363e05c6e1612fd@postgresql.org\n2) currtid() and currtid2():\nhttps://postgr.es/m/CAJGNTeO93u-5APMga6WH41eTZ3Uee9f3s8dCpA-GSSqNs1b=Ug@mail.gmail.com\n\nAny hole in the code that allows a relation without storage to attempt\nto access a table AM is able to take the server down. Of course, any\ncode doing that would be wrong, but it seems to me that we had better\nput in place better defenses so as any mistake does not result in a\nserver going down. Looking at the code, we would need to do a couple\nof things, mainly:\n- Create a new table AM for relations without storage to plug into.\nThe idea would be a simple wrapper for all the AM functions that\ntriggers a elog(ERROR) for each one of them. If possible, provide\nsome details based on the arguments given by the caller of the\nfunction. Here are some ideas of names: no_storage_table_am,\nno_storage_am, error_table_am, error_am, fallback_am (this one sounds\nwrong). This requires an extra row in pg_am.\n- Tweak the area around RelationInitTableAccessMethod(), with rd_am so\nas rd_amhandler is never NULL.\n\nPutting sanity checks within all the table_* functions of tableam.h\nwould not be a good idea, as nothing prevents the call of what's\nstored in rel->rd_tableam. 
\n\nThoughts?\n--\nMichael", "msg_date": "Tue, 9 Feb 2021 16:27:34 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Fallback table AM for relkinds without storage" }, { "msg_contents": "On Tue, Feb 09, 2021 at 04:27:34PM +0900, Michael Paquier wrote:\n> Putting sanity checks within all the table_* functions of tableam.h\n> would not be a good idea, as nothing prevents the call of what's\n> stored in rel->rd_tableam. \n\nI have been playing with this idea, and finished with the attached,\nwhich is not the sexiest patch around. The table AM used as fallback\nfor tables without storage is called no_storage (this could be called\nvirtual_am?). Reverting e786be5 or dd705a0 leads to an error coming\nfrom no_storage instead of a crash.\n\nOne thing to note is that this simplifies a bit slot_callbacks as\nviews, foreign tables and partitioned tables can grab their slot type\ndirectly from this new table AM.\n--\nMichael", "msg_date": "Mon, 15 Feb 2021 16:21:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Fallback table AM for relkinds without storage" }, { "msg_contents": "On Mon, Feb 15, 2021 at 04:21:38PM +0900, Michael Paquier wrote:\n> On Tue, Feb 09, 2021 at 04:27:34PM +0900, Michael Paquier wrote:\n> > Putting sanity checks within all the table_* functions of tableam.h\n> > would not be a good idea, as nothing prevents the call of what's\n> > stored in rel->rd_tableam. \n> \n> I have been playing with this idea, and finished with the attached,\n> which is not the sexiest patch around. The table AM used as fallback\n> for tables without storage is called no_storage (this could be called\n> virtual_am?). 
Reverting e786be5 or dd705a0 leads to an error coming\n> from no_storage instead of a crash.\n\nIf you apply this patch, will you want to actually revert those earlier changes?\n\n> One thing to note is that this simplifies a bit slot_callbacks as\n> views, foreign tables and partitioned tables can grab their slot type\n> directly from this new table AM.\n\nAlso (related), this still crashes if methods are omitted from the initializer,\nlike:\n\n// .slot_callbacks = no_storage_slot_callbacks,\n\nI'm not sure if there's any better way to enforce that's updated when callbacks\nare added.\n\nMost of the methods have Assert( != NULL), so maybe this one is missing?\n\nsrc/backend/access/table/tableamapi.c\nGetTableAmRoutine(Oid amhandler)\n...\n\tAssert(routine->slot_callbacks != NULL);\n\nSee also\nhttps://www.postgresql.org/message-id/CALfoeisgdZhYDrJOukaBzvXfJOK2FQ0szVMK7dzmcy6w93iDUA%40mail.gmail.com\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 21 Feb 2021 09:43:59 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Fallback table AM for relkinds without storage" }, { "msg_contents": "On Sun, Feb 21, 2021 at 09:43:59AM -0600, Justin Pryzby wrote:\n> If you apply this patch, will you want to actually revert those\n> earlier changes?\n\nThat's not in the plan.\n\n> Also (related), this still crashes if methods are omitted from the initializer,\n> like:\n> \n> // .slot_callbacks = no_storage_slot_callbacks,\n> \n> I'm not sure if there's any better way to enforce that's updated when callbacks\n> are added.\n> \n> Most of the methods have Assert( != NULL), so maybe this one is missing?\n> \n> src/backend/access/table/tableamapi.c\n> GetTableAmRoutine(Oid amhandler)\n> ...\n> \tAssert(routine->slot_callbacks != NULL);\n\nGood point, that looks like an omission. 
Even if the code tries to\nlook after the slot type for a view, foreign table or partitioned\ntable, this cannot be NULL.\n--\nMichael", "msg_date": "Mon, 22 Feb 2021 16:37:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Fallback table AM for relkinds without storage" }, { "msg_contents": "Hi,\n\nOn 2021-02-15 16:21:38 +0900, Michael Paquier wrote:\n> I have been playing with this idea, and finished with the attached,\n> which is not the sexiest patch around. The table AM used as fallback\n> for tables without storage is called no_storage (this could be called\n> virtual_am?).\n\n\n> One thing to note is that this simplifies a bit slot_callbacks as\n> views, foreign tables and partitioned tables can grab their slot type\n> directly from this new table AM.\n\nThis doesn't seem like an advantage to me. Isn't this just pushing logic\naway from a fairly obvious point into an AM that one would expect to\nnever actually get called?\n\n\n> +static const TupleTableSlotOps *\n> +no_storage_slot_callbacks(Relation relation)\n> +{\n> +\tif (relation->rd_rel->relkind == RELKIND_FOREIGN_TABLE)\n> +\t{\n> +\t\t/*\n> +\t\t * Historically FDWs expect to store heap tuples in slots. Continue\n> +\t\t * handing them one, to make it less painful to adapt FDWs to new\n> +\t\t * versions. The cost of a heap slot over a virtual slot is pretty\n> +\t\t * small.\n> +\t\t */\n> +\t\treturn &TTSOpsHeapTuple;\n> +\t}\n> +\n> +\t/*\n> +\t * These need to be supported, as some parts of the code (like COPY) need\n> +\t * to create slots for such relations too. 
It seems better to centralize\n> +\t * the knowledge that a heap slot is the right thing in that case here.\n> +\t */\n> +\tif (relation->rd_rel->relkind != RELKIND_VIEW &&\n> +\t\trelation->rd_rel->relkind != RELKIND_PARTITIONED_TABLE)\n> +\t\telog(ERROR, \"no_storage_slot_callbacks() failed on relation \\\"%s\\\"\",\n> +\t\t\t RelationGetRelationName(relation));\n> +\treturn &TTSOpsVirtual;\n> +}\n\nIf we want to go down this path what's the justification for have\nrelkind awareness inside the AM? If we want it, we should give FDWs\ntheir own tableam. And we should do the same for sequences (that'd imo\nbe a much nicer improvement than this change in itself).\n\nIf we were to go for separate AMs I think we could consider implementing\nmost of their functionality in one file, to avoid needing to duplicate\nthe functions that error out.\n\nAnd I'd vote for not naming it no_storage - to me that sounds like a\nname you'd give \"blackhole_am\". This concept kinda reminds me of\npseudotypes - so maybe we should just name it pseudo_am.c or such?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 22 Feb 2021 17:19:37 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Fallback table AM for relkinds without storage" }, { "msg_contents": "On Mon, Feb 22, 2021 at 05:19:37PM -0800, Andres Freund wrote:\n> This doesn't seem like an advantage to me. Isn't this just pushing logic\n> away from a fairly obvious point into an AM that one would expect to\n> never actually get called?\n> \n> If we want to go down this path what's the justification for have\n> relkind awareness inside the AM? 
If we want it, we should give FDWs\n> their own tableam.\n\nAgreed, I am not completely comfortable with passing down any\nknowledge of the relkind down to the AM itself.\n\n> And we should do the same for sequences (that'd imo be a much nicer\n> improvement than this change in itself).\n\nSequences just use the existing heap AM, so you mean to drop from\nrelcache.c anything specific to sequences when initializing the\nrelation cache and set pg_class.relam accordingly, right? That makes\nsense for consistency with the rest.\n\n> If we were to go for separate AMs I think we could consider implementing\n> most of their functionality in one file, to avoid needing to duplicate\n> the functions that error out.\n\nYep, definitely. No issues with that.\n\n> And I'd vote for not naming it no_storage - to me that sounds like a\n> name you'd give \"blackhole_am\". This concept kinda reminds me of\n> pseudotypes - so maybe we should just name it pseudo_am.c or such?\n\nFor the file name, using something like pseudo_handler.c or similar\nwould be fine, I guess. However, if we go down the path of one AM per\nrelkind for the slot callback, then why not just calling the AMs\nforeign_table_am, view_am and part_table_am? This could be coupled\nwith sanity checks to make sure that AMs assigned to those relations\nare the expected ones.\n\nblackhole_am is not the best fit for that IMO. It already exists, but\nI would be fine to change this code, of course:\nhttps://github.com/michaelpq/pg_plugins/tree/master/blackhole_am\n--\nMichael", "msg_date": "Wed, 24 Feb 2021 11:51:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Fallback table AM for relkinds without storage" }, { "msg_contents": "On Wed, Feb 24, 2021 at 11:51:36AM +0900, Michael Paquier wrote:\n> For the file name, using something like pseudo_handler.c or similar\n> would be fine, I guess. 
However, if we go down the path of one AM per\n> relkind for the slot callback, then why not just calling the AMs\n> foreign_table_am, view_am and part_table_am? This could be coupled\n> with sanity checks to make sure that AMs assigned to those relations\n> are the expected ones.\n\nI am still not quite sure what needs to be done here and this needs\nmore thoughts, so this has been marked as returned with feedback for\nnow. Instead of pushing forward with this patch, I'll just spend more\ncycles on stuff that has more chances to make it into 14.\n--\nMichael", "msg_date": "Sun, 14 Mar 2021 20:59:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Fallback table AM for relkinds without storage" } ]
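The defensive idea discussed in the thread above — installing a fallback routine whose every callback raises a descriptive error, instead of leaving a NULL pointer that crashes the server when reached — is a general pattern. Here is a minimal sketch in Python with invented names; the real thing would of course be a C `TableAmRoutine` whose callbacks `elog(ERROR)`:

```python
# Sketch of the "error-raising fallback" pattern: every callback slot is
# filled with a stub that fails cleanly, so a code path that wrongly
# reaches the AM reports an error instead of dereferencing NULL.
class NoStorageError(Exception):
    pass

def make_no_storage_routine(callback_names):
    """Build a routine whose every callback raises NoStorageError."""
    def make_stub(name):
        def stub(*args, **kwargs):
            raise NoStorageError(
                'table access method callback "%s" called on a relation '
                'without storage' % name)
        return stub
    return {name: make_stub(name) for name in callback_names}

# A relation without storage (e.g. a view) gets the fallback routine
# instead of NULL callbacks.
routine = make_no_storage_routine(["scan_begin", "tuple_insert"])
try:
    routine["scan_begin"]()   # e.g. a stray sequential scan of a view
except NoStorageError as err:
    print(err)
```

The payoff is the same as in the thread: the mistake still has to be fixed at the call site, but it degrades to a reportable error rather than taking the whole server down.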
[ { "msg_contents": "Several information schema views track dependencies between \nfunctions/procedures and objects used by them. These had not been\nimplemented so far because PostgreSQL doesn't track objects used in a\nfunction body. However, formally, these also show dependencies used\nin parameter default expressions, which PostgreSQL does support and\ntrack. So for the sake of completeness, we might as well add these.\nIf dependency tracking for function bodies is ever implemented, these\nviews will automatically work correctly.\n\nI developed this as part of the patch \"SQL-standard function body\", \nwhere it would become more useful, but I'm sending it now separately to \nnot bloat the other patch further.\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/1c11f1eb-f00c-43b7-799d-2d44132c02d7@2ndquadrant.com", "msg_date": "Tue, 9 Feb 2021 15:06:20 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Routine usage information schema tables" }, { "msg_contents": "> On 02/09/2021 3:06 PM Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> \n> Several information schema views track dependencies between \n> functions/procedures and objects used by them. These had not been\n\n> [0001-Routine-usage-information-schema-tables.patch]\n\nSpotted one typo:\n\nincluded here ony if\nincluded here only if\n\n\nErik Rijkers\n\n\n", "msg_date": "Tue, 9 Feb 2021 15:25:50 +0100 (CET)", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": false, "msg_subject": "Re: Routine usage information schema tables" } ]
[ { "msg_contents": "Hi,\n\nCommit 3b733fcd04 caused the buildfarm member \"rorqual\" to report\nthe following error and fail the regression test. The cause of this issue\nis a bug in that commit.\n\n ERROR: invalid spinlock number: 0\n\nBut while investigating the issue, I found that this error could happen\neven in the current master (without commit 3b733fcd04). The error can\nbe easily reproduced by reading the pg_stat_wal_receiver view before\nwalreceiver starts up, in the server built with --disable-atomics --disable-spinlocks.\nFurthermore, if you try to read pg_stat_wal_receiver again,\nthat gets stuck. This is not good.\n\nISTM that the commit 2c8dd05d6c caused this issue. The commit changed\npg_stat_get_wal_receiver() so that it reads \"writtenUpto\" by using\npg_atomic_read_u64(). But since \"writtenUpto\" is initialized only when\nwalreceiver starts up, reading \"writtenUpto\" before the startup of\nwalreceiver can cause the error.\n\nAlso pg_stat_get_wal_receiver() calls pg_atomic_read_u64() while\na spinlock is being held. This may cause the process to get stuck\nin the --disable-atomics case, I guess.\n\nThoughts?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 9 Feb 2021 23:17:04 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "ERROR: invalid spinlock number: 0" }, { "msg_contents": "Hi Fujii-san,\n\nOn Tue, Feb 09, 2021 at 11:17:04PM +0900, Fujii Masao wrote:\n> ISTM that the commit 2c8dd05d6c caused this issue. The commit changed\n> pg_stat_get_wal_receiver() so that it reads \"writtenUpto\" by using\n> pg_atomic_read_u64(). But since \"writtenUpto\" is initialized only when\n> walreceiver starts up, reading \"writtenUpto\" before the startup of\n> walreceiver can cause the error.\n\nIndeed, that's a problem. We should at least move that out of the\nspin lock area. 
I'll try to look at that in detail, and that's going\nto take me a couple of days at least. Sorry for the delay.\n--\nMichael", "msg_date": "Thu, 11 Feb 2021 21:55:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ERROR: invalid spinlock number: 0" }, { "msg_contents": "On 2021/02/11 21:55, Michael Paquier wrote:\n> Hi Fujii-san,\n> \n> On Tue, Feb 09, 2021 at 11:17:04PM +0900, Fujii Masao wrote:\n>> ISTM that the commit 2c8dd05d6c caused this issue. The commit changed\n>> pg_stat_get_wal_receiver() so that it reads \"writtenUpto\" by using\n>> pg_atomic_read_u64(). But since \"writtenUpto\" is initialized only when\n>> walreceiver starts up, reading \"writtenUpto\" before the startup of\n>> walreceiver can cause the error.\n> \n> Indeed, that's a problem. We should at least move that out of the\n> spin lock area.\n\nYes, so what about the attached patch?\n\nWe didn't notice this issue for a long time because no regression test checks\npg_stat_wal_receiver. So I included such a test in the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 11 Feb 2021 23:30:13 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: ERROR: invalid spinlock number: 0" }, { "msg_contents": "On Thu, Feb 11, 2021 at 11:30:13PM +0900, Fujii Masao wrote:\n> Yes, so what about the attached patch?\n\nI see. So the first error triggering the spinlock error would cause\na transaction failure because the fallback implementation of atomics\nuses a spinlock for this variable, and it may not be initialized in this\ncode path.\n\n> We didn't notice this issue for a long time because no regression test checks\n> pg_stat_wal_receiver. So I included such a test in the patch.\n\nMoving that behind ready_to_display is fine by me seeing where the\ninitialization is done. 
The test case is a good addition.\n\n+ * Read \"writtenUpto\" without holding a spinlock. So it may not be\n+ * consistent with other WAL receiver's shared variables protected by a\n+ * spinlock. This is OK because that variable is used only for\n+ * informational purpose and should not be used for data integrity checks.\nIt seems to me that the first two sentences of this comment should be\ncombined.\n--\nMichael", "msg_date": "Mon, 15 Feb 2021 17:27:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ERROR: invalid spinlock number: 0" }, { "msg_contents": "On Mon, Feb 15, 2021 at 9:27 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Thu, Feb 11, 2021 at 11:30:13PM +0900, Fujii Masao wrote:\n> > Yes, so what about the attached patch?\n>\n> I see. So the first error triggering the spinlock error would cause\n> a transaction failure because the fallback implementation of atomics\n> uses a spinlock for this variable, and it may not be initialized in this\n> code path.\n\nWhy not initialise it in WalRcvShmemInit()?\n\n\n", "msg_date": "Mon, 15 Feb 2021 22:47:05 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ERROR: invalid spinlock number: 0" }, { "msg_contents": "On Mon, Feb 15, 2021 at 10:47:05PM +1300, Thomas Munro wrote:\n> Why not initialise it in WalRcvShmemInit()?\n\nI was thinking about doing that as well, but we have no real need to\ninitialize this stuff in most cases, say standalone deployments. 
In\nparticular for the fallback implementation of atomics, we would\nprepare a spinlock for nothing.\n--\nMichael", "msg_date": "Mon, 15 Feb 2021 19:45:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ERROR: invalid spinlock number: 0" }, { "msg_contents": "\n\nOn 2021/02/15 19:45, Michael Paquier wrote:\n> On Mon, Feb 15, 2021 at 10:47:05PM +1300, Thomas Munro wrote:\n>> Why not initialise it in WalRcvShmemInit()?\n> \n> I was thinking about doing that as well, but we have no real need to\n> initialize this stuff in most cases, say standalone deployments. In\n> particular for the fallback implementation of atomics, we would\n> prepare a spinlock for nothing.\n\nBut on second thought, if we make WalReceiverMain() call pg_atomic_init_u64(),\nthe variable is initialized (i.e., SpinLockInit() is called in --disable-atomics)\nevery time walreceiver is started. That may be problematic? If so, the variable\nneeds to be initialized in WalRcvShmemInit(), instead.\n\nBTW, the recent commit 46d6e5f567 has a similar issue. The variable\nthat commit added is initialized in InitProcess(), but maybe should be done\nin InitProcGlobal() or elsewhere.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 15 Feb 2021 20:49:08 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: ERROR: invalid spinlock number: 0" }, { "msg_contents": "Hi,\n\nOn 2021-02-15 19:45:21 +0900, Michael Paquier wrote:\n> On Mon, Feb 15, 2021 at 10:47:05PM +1300, Thomas Munro wrote:\n> > Why not initialise it in WalRcvShmemInit()?\n> \n> I was thinking about doing that as well, but we have no real need to\n> initialize this stuff in most cases, say standalone deployments. In\n> particular for the fallback implementation of atomics, we would\n> prepare a spinlock for nothing.\n\nSo what? 
It's just about free to initialize a spinlock, whether it's\nusing the fallback implementation or not. Initializing upon walsender\nstartup adds a lot of complications, because e.g. somebody could already\nhold the spinlock because the previous walsender just disconnected, and\nthey were looking at the stats.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 15 Feb 2021 13:28:05 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: ERROR: invalid spinlock number: 0" }, { "msg_contents": "On 2021/02/16 6:28, Andres Freund wrote:\n> Hi,\n> \n> On 2021-02-15 19:45:21 +0900, Michael Paquier wrote:\n>> On Mon, Feb 15, 2021 at 10:47:05PM +1300, Thomas Munro wrote:\n>>> Why not initialise it in WalRcvShmemInit()?\n>>\n>> I was thinking about doing that as well, but we have no real need to\n>> initialize this stuff in most cases, say standalone deployments. In\n>> particular for the fallback implementation of atomics, we would\n>> prepare a spinlock for nothing.\n> \n> So what? It's just about free to initialize a spinlock, whether it's\n> using the fallback implementation or not. Initializing upon walsender\n> startup adds a lot of complications, because e.g. somebody could already\n> hold the spinlock because the previous walsender just disconnected, and\n> they were looking at the stats.\n\nEven if we initialize \"writtenUpto\" in WalRcvShmemInit(), WalReceiverMain()\nstill needs to initialize (reset to 0) by using pg_atomic_write_u64().\n\nBasically we should not acquire new spinlock while holding another spinlock,\nto shorten the spinlock duration. Right? 
If yes, we need to move\npg_atomic_read_u64() of \"writtenUpto\" after the release of spinlock in\npg_stat_get_wal_receiver.\n\nAttached is the updated version of the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 16 Feb 2021 12:43:42 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: ERROR: invalid spinlock number: 0" }, { "msg_contents": "On Tue, Feb 16, 2021 at 12:43:42PM +0900, Fujii Masao wrote:\n> On 2021/02/16 6:28, Andres Freund wrote:\n>> So what? It's just about free to initialize a spinlock, whether it's\n>> using the fallback implementation or not. Initializing upon walsender\n>> startup adds a lot of complications, because e.g. somebody could already\n>> hold the spinlock because the previous walsender just disconnected, and\n>> they were looking at the stats.\n\nOkay.\n\n> Even if we initialize \"writtenUpto\" in WalRcvShmemInit(), WalReceiverMain()\n> still needs to initialize (reset to 0) by using pg_atomic_write_u64().\n\nYes, you have to do that.\n\n> Basically we should not acquire new spinlock while holding another spinlock,\n> to shorten the spinlock duration. Right? If yes, we need to move\n> pg_atomic_read_u64() of \"writtenUpto\" after the release of spinlock in\n> pg_stat_get_wal_receiver.\n\nIt would not matter much as a NULL tuple is returned as long as the\nWAL receiver information is not ready to be displayed. The only\nreason why all the fields are read before checking for\nready_to_display is that we can be sure that everything is consistent\nwith the PID. So reading writtenUpto before or after does not really\nmatter logically. I would just move it after the check, as you did\npreviously.\n\n+ /*\n+ * Read \"writtenUpto\" without holding a spinlock. So it may not be\n+ * consistent with other WAL receiver's shared variables protected by a\n+ * spinlock. 
This is OK because that variable is used only for\n+ * informational purpose and should not be used for data integrity checks.\n+ */\nWhat about the following?\n\"Read \"writtenUpto\" without holding a spinlock. Note that it may not\nbe consistent with the other shared variables of the WAL receiver\nprotected by a spinlock, but this should not be used for data\nintegrity checks.\"\n\nI agree that what has been done with MyProc->waitStart in 46d6e5f is\nnot safe, and that initialization should happen once at postmaster\nstartup, with a write(0) when starting the backend. There are two of\nthem in proc.c, one in twophase.c. Do you mind if I add an open item\nfor this one?\n--\nMichael", "msg_date": "Tue, 16 Feb 2021 15:50:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ERROR: invalid spinlock number: 0" }, { "msg_contents": "On 2021/02/16 15:50, Michael Paquier wrote:\n> On Tue, Feb 16, 2021 at 12:43:42PM +0900, Fujii Masao wrote:\n>> On 2021/02/16 6:28, Andres Freund wrote:\n>>> So what? It's just about free to initialize a spinlock, whether it's\n>>> using the fallback implementation or not. Initializing upon walsender\n>>> startup adds a lot of complications, because e.g. somebody could already\n>>> hold the spinlock because the previous walsender just disconnected, and\n>>> they were looking at the stats.\n> \n> Okay.\n> \n>> Even if we initialize \"writtenUpto\" in WalRcvShmemInit(), WalReceiverMain()\n>> still needs to initialize (reset to 0) by using pg_atomic_write_u64().\n> \n> Yes, you have to do that.\n> \n>> Basically we should not acquire new spinlock while holding another spinlock,\n>> to shorten the spinlock duration. Right? If yes, we need to move\n>> pg_atomic_read_u64() of \"writtenUpto\" after the release of spinlock in\n>> pg_stat_get_wal_receiver.\n> \n> It would not matter much as a NULL tuple is returned as long as the\n> WAL receiver information is not ready to be displayed. 
The only\n> reason why all the fields are read before checking for\n> ready_to_display is that we can be sure that everything is consistent\n> with the PID. So reading writtenUpto before or after does not really\n> matter logically. I would just move it after the check, as you did\n> previously.\n\nOK.\n\n> \n> + /*\n> + * Read \"writtenUpto\" without holding a spinlock. So it may not be\n> + * consistent with other WAL receiver's shared variables protected by a\n> + * spinlock. This is OK because that variable is used only for\n> + * informational purpose and should not be used for data integrity checks.\n> + */\n> What about the following?\n> \"Read \"writtenUpto\" without holding a spinlock. Note that it may not\n> be consistent with the other shared variables of the WAL receiver\n> protected by a spinlock, but this should not be used for data\n> integrity checks.\"\n\nSounds good. Attached is the updated version of the patch.\n\n> \n> I agree that what has been done with MyProc->waitStart in 46d6e5f is\n> not safe, and that initialization should happen once at postmaster\n> startup, with a write(0) when starting the backend. There are two of\n> them in proc.c, one in twophase.c. Do you mind if I add an open item\n> for this one?\n\nYeah, please feel free to do that! BTW, I already posted the patch\naddressing that issue, at [1].\n\n[1] https://postgr.es/m/1df88660-6f08-cc6e-b7e2-f85296a2bdab@oss.nttdata.com\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 16 Feb 2021 23:47:52 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: ERROR: invalid spinlock number: 0" }, { "msg_contents": "On Tue, Feb 16, 2021 at 11:47:52PM +0900, Fujii Masao wrote:\n> On 2021/02/16 15:50, Michael Paquier wrote:\n>> + /*\n>> + * Read \"writtenUpto\" without holding a spinlock. 
So it may not be\n>> + * consistent with other WAL receiver's shared variables protected by a\n>> + * spinlock. This is OK because that variable is used only for\n>> + * informational purpose and should not be used for data integrity checks.\n>> + */\n>> What about the following?\n>> \"Read \"writtenUpto\" without holding a spinlock. Note that it may not\n>> be consistent with the other shared variables of the WAL receiver\n>> protected by a spinlock, but this should not be used for data\n>> integrity checks.\"\n> \n> Sounds good. Attached is the updated version of the patch.\n\nThanks, looks good to me.\n\n>> I agree that what has been done with MyProc->waitStart in 46d6e5f is\n>> not safe, and that initialization should happen once at postmaster\n>> startup, with a write(0) when starting the backend. There are two of\n>> them in proc.c, one in twophase.c. Do you mind if I add an open item\n>> for this one?\n> \n> Yeah, please feel free to do that! BTW, I already posted the patch\n> addressing that issue, at [1].\n\nOkay, item added with a link to the original thread.\n--\nMichael", "msg_date": "Wed, 17 Feb 2021 13:52:09 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ERROR: invalid spinlock number: 0" }, { "msg_contents": "\n\nOn 2021/02/17 13:52, Michael Paquier wrote:\n> On Tue, Feb 16, 2021 at 11:47:52PM +0900, Fujii Masao wrote:\n>> On 2021/02/16 15:50, Michael Paquier wrote:\n>>> + /*\n>>> + * Read \"writtenUpto\" without holding a spinlock. So it may not be\n>>> + * consistent with other WAL receiver's shared variables protected by a\n>>> + * spinlock. This is OK because that variable is used only for\n>>> + * informational purpose and should not be used for data integrity checks.\n>>> + */\n>>> What about the following?\n>>> \"Read \"writtenUpto\" without holding a spinlock. 
Note that it may not\n>>> be consistent with the other shared variables of the WAL receiver\n>>> protected by a spinlock, but this should not be used for data\n>>> integrity checks.\"\n>>\n>> Sounds good. Attached is the updated version of the patch.\n> \n> Thanks, looks good to me.\n\nPushed. Thanks!\n\n\n> \n>>> I agree that what has been done with MyProc->waitStart in 46d6e5f is\n>>> not safe, and that initialization should happen once at postmaster\n>>> startup, with a write(0) when starting the backend. There are two of\n>>> them in proc.c, one in twophase.c. Do you mind if I add an open item\n>>> for this one?\n>>\n>> Yeah, please feel free to do that! BTW, I already posted the patch\n>> addressing that issue, at [1].\n> \n> Okay, item added with a link to the original thread.\n\nThanks!\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 18 Feb 2021 23:32:16 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: ERROR: invalid spinlock number: 0" } ]
[ { "msg_contents": "Hi,\n\nI personally use it as a checksum for a large unordered set, where \nperformance and simplicity are prioritized over collision resilience.\nMaybe there are other ways to use them.\n\nBest, Alex", "msg_date": "Tue, 9 Feb 2021 15:25:19 +0000", "msg_from": "Alexey Bashtanov <bashtanov@imap.cc>", "msg_from_op": true, "msg_subject": "[patch] bit XOR aggregate functions" }, { "msg_contents": "At Tue, 9 Feb 2021 15:25:19 +0000, Alexey Bashtanov <bashtanov@imap.cc> wrote in \n> I personally use it as a checksum for a large unordered set, where\n> performance and simplicity are prioritized over collision resilience.\n> Maybe there are other ways to use them.\n\nFWIW the BIT_XOR can be created using CREATE AGGREGATE.\n\nCREATE OR REPLACE AGGREGATE BIT_XOR(IN v smallint) (SFUNC = int2xor, STYPE = smallint);\nCREATE OR REPLACE AGGREGATE BIT_XOR(IN v int4) (SFUNC = int4xor, STYPE = int4);\nCREATE OR REPLACE AGGREGATE BIT_XOR(IN v bigint) (SFUNC = int8xor, STYPE = bigint);\nCREATE OR REPLACE AGGREGATE BIT_XOR(IN v bit) (SFUNC = bitxor, STYPE = bit);\n\nThe bit_and/bit_or aggregates go back to 2004; that commit says:\n\n> commit 8096fe45cee42ce02e602cbea08e969139a77455\n> Author: Bruce Momjian <bruce@momjian.us>\n> Date: Wed May 26 15:26:28 2004 +0000\n...\n> (2) bitwise integer aggregates named bit_and and bit_or for\n> int2, int4, int8 and bit types. They are not standard, but I find\n> them useful. 
I needed them once.\n\nWe already had CREATE AGGREATE at the time, so BIT_XOR can be thought\nas it falls into the same category with BIT_AND and BIT_OR, that is,\nwe may have BIT_XOR as an intrinsic aggregation?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 10 Feb 2021 14:42:04 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [patch] bit XOR aggregate functions" }, { "msg_contents": "On 10.02.21 06:42, Kyotaro Horiguchi wrote:\n> We already had CREATE AGGREATE at the time, so BIT_XOR can be thought\n> as it falls into the same category with BIT_AND and BIT_OR, that is,\n> we may have BIT_XOR as an intrinsic aggregation?\n\nI think the use of BIT_XOR is quite separate from BIT_AND and BIT_OR. \nThe latter give you an \"all\" or \"any\" result of the bits set. BIT_XOR \nwill return 1 or true if an odd number of inputs are 1 or true, which \nisn't useful by itself. But it can be used as a checksum, so it seems \npretty reasonable to me to add it. Perhaps the use case could be \npointed out in the documentation.\n\n\n\n", "msg_date": "Wed, 3 Mar 2021 15:30:15 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [patch] bit XOR aggregate functions" }, { "msg_contents": "On Wed, Mar 3, 2021 at 7:30 PM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 10.02.21 06:42, Kyotaro Horiguchi wrote:\n> > We already had CREATE AGGREATE at the time, so BIT_XOR can be thought\n> > as it falls into the same category with BIT_AND and BIT_OR, that is,\n> > we may have BIT_XOR as an intrinsic aggregation?\n>\n> I think the use of BIT_XOR is quite separate from BIT_AND and BIT_OR.\n> The latter give you an \"all\" or \"any\" result of the bits set. BIT_XOR\n> will return 1 or true if an odd number of inputs are 1 or true, which\n> isn't useful by itself. 
But it can be used as a checksum, so it seems\n> pretty reasonable to me to add it. Perhaps the use case could be\n> pointed out in the documentation.\n>\n>\n>\n>\nHi Alex,\n\n\nThe patch is failing just because of a comment, which is already changed by\nanother patch\n\n-/* Define to build with OpenSSL support. (--with-ssl=openssl) */\n\n+/* Define to 1 if you have OpenSSL support. */\n\nDo you mind sending an updated patch?\n\nhttp://cfbot.cputube.org/patch_32_2980.log.\n\nI am changing the status to \"Waiting for Author\"\n\n\nIn my opinion that change no more requires so I removed that and attached\nthe patch.\n\n-- \nIbrar Ahmed", "msg_date": "Thu, 4 Mar 2021 22:14:26 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [patch] bit XOR aggregate functions" }, { "msg_contents": "Hi all,\n\nThanks for your reviews.\nI've updated my patch to the current master and added a documentation \nline suggesting using the new function as a checksum.\n\nBest regards, Alex\n\nOn 04/03/2021 17:14, Ibrar Ahmed wrote:\n>\n>\n> On Wed, Mar 3, 2021 at 7:30 PM Peter Eisentraut \n> <peter.eisentraut@enterprisedb.com \n> <mailto:peter.eisentraut@enterprisedb.com>> wrote:\n>\n> On 10.02.21 06:42, Kyotaro Horiguchi wrote:\n> > We already had CREATE AGGREATE at the time, so BIT_XOR can be\n> thought\n> > as it falls into the same category with BIT_AND and BIT_OR, that is,\n> > we may have BIT_XOR as an intrinsic aggregation?\n>\n> I think the use of BIT_XOR is quite separate from BIT_AND and BIT_OR.\n> The latter give you an \"all\" or \"any\" result of the bits set. \n> BIT_XOR\n> will return 1 or true if an odd number of inputs are 1 or true, which\n> isn't useful by itself.  But it can be used as a checksum, so it\n> seems\n> pretty reasonable to me to add it.  
Perhaps the use case could be\n> pointed out in the documentation.\n>\n>\n>\n>\n> Hi Alex,\n>\n> The patch is failing just because of a comment, which is already \n> changed by another patch\n>\n> -/* Define to build with OpenSSL support. (--with-ssl=openssl) */\n>\n> +/* Define to 1 if you have OpenSSL support. */\n>\n>\n> Do you mind sending an updated patch?\n>\n> http://cfbot.cputube.org/patch_32_2980.log.\n>\n> I am changing the status to \"Waiting for Author\"\n>\n>\n> In my opinion that change no more requires so I removed that and \n> attached the patch.\n>\n> -- \n> Ibrar Ahmed", "msg_date": "Fri, 5 Mar 2021 12:42:55 +0000", "msg_from": "Alexey Bashtanov <bashtanov@imap.cc>", "msg_from_op": true, "msg_subject": "Re: [patch] bit XOR aggregate functions" }, { "msg_contents": "On 05.03.21 13:42, Alexey Bashtanov wrote:\n> Thanks for your reviews.\n> I've updated my patch to the current master and added a documentation \n> line suggesting using the new function as a checksum.\n\ncommitted\n\n\n", "msg_date": "Sat, 6 Mar 2021 19:37:13 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [patch] bit XOR aggregate functions" }, { "msg_contents": "On Wed, Mar 03, 2021 at 03:30:15PM +0100, Peter Eisentraut wrote:\n> On 10.02.21 06:42, Kyotaro Horiguchi wrote:\n> > We already had CREATE AGGREATE at the time, so BIT_XOR can be\n> > thought as it falls into the same category with BIT_AND and\n> > BIT_OR, that is, we may have BIT_XOR as an intrinsic aggregation?\n> \n> I think the use of BIT_XOR is quite separate from BIT_AND and\n> BIT_OR. The latter give you an \"all\" or \"any\" result of the bits\n> set. BIT_XOR will return 1 or true if an odd number of inputs are 1\n> or true, which isn't useful by itself. But it can be used as a\n> checksum, so it seems pretty reasonable to me to add it. 
Perhaps\n> the use case could be pointed out in the documentation.\n\nIf this is the only use case, is there some way to refuse to execute\nit if it doesn't contain an unambiguous ORDER BY, as illustrated\nbelow?\n\n SELECT BIT_XOR(b ORDER BY a, c)... /* works */\n SELECT BIT_XOR(b) OVER (ORDER BY a, c)... /* works */\n SELECT BIT_XOR(b) FROM... /* errors out */\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Sat, 6 Mar 2021 19:55:54 +0000", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: [patch] bit XOR aggregate functions" }, { "msg_contents": "On 3/6/21 8:55 PM, David Fetter wrote:\n> On Wed, Mar 03, 2021 at 03:30:15PM +0100, Peter Eisentraut wrote:\n>> On 10.02.21 06:42, Kyotaro Horiguchi wrote:\n>>> We already had CREATE AGGREATE at the time, so BIT_XOR can be\n>>> thought as it falls into the same category with BIT_AND and\n>>> BIT_OR, that is, we may have BIT_XOR as an intrinsic aggregation?\n>>\n>> I think the use of BIT_XOR is quite separate from BIT_AND and\n>> BIT_OR. The latter give you an \"all\" or \"any\" result of the bits\n>> set. BIT_XOR will return 1 or true if an odd number of inputs are 1\n>> or true, which isn't useful by itself. But it can be used as a\n>> checksum, so it seems pretty reasonable to me to add it. Perhaps\n>> the use case could be pointed out in the documentation.\n> \n> If this is the only use case, is there some way to refuse to execute\n> it if it doesn't contain an unambiguous ORDER BY, as illustrated\n> below?\n> \n> SELECT BIT_XOR(b ORDER BY a, c)... /* works */\n> SELECT BIT_XOR(b) OVER (ORDER BY a, c)... /* works */\n> SELECT BIT_XOR(b) FROM... 
/* errors out */\n\n\nWhy would such an error be necessary, or even desirable?\n-- \nVik Fearing\n\n\n", "msg_date": "Sat, 6 Mar 2021 20:57:46 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: [patch] bit XOR aggregate functions" }, { "msg_contents": "On Sat, Mar 06, 2021 at 08:57:46PM +0100, Vik Fearing wrote:\n> On 3/6/21 8:55 PM, David Fetter wrote:\n> > On Wed, Mar 03, 2021 at 03:30:15PM +0100, Peter Eisentraut wrote:\n> >> On 10.02.21 06:42, Kyotaro Horiguchi wrote:\n> >>> We already had CREATE AGGREATE at the time, so BIT_XOR can be\n> >>> thought as it falls into the same category with BIT_AND and\n> >>> BIT_OR, that is, we may have BIT_XOR as an intrinsic aggregation?\n> >>\n> >> I think the use of BIT_XOR is quite separate from BIT_AND and\n> >> BIT_OR. The latter give you an \"all\" or \"any\" result of the bits\n> >> set. BIT_XOR will return 1 or true if an odd number of inputs are 1\n> >> or true, which isn't useful by itself. But it can be used as a\n> >> checksum, so it seems pretty reasonable to me to add it. Perhaps\n> >> the use case could be pointed out in the documentation.\n> > \n> > If this is the only use case, is there some way to refuse to execute\n> > it if it doesn't contain an unambiguous ORDER BY, as illustrated\n> > below?\n> > \n> > SELECT BIT_XOR(b ORDER BY a, c)... /* works */\n> > SELECT BIT_XOR(b) OVER (ORDER BY a, c)... /* works */\n> > SELECT BIT_XOR(b) FROM... 
/* errors out */\n> \n> \n> Why would such an error be necessary, or even desirable?\n\nBecause there is no way to ensure that the results remain consistent\nfrom one execution to the next without such a guarantee.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Sat, 6 Mar 2021 20:00:24 +0000", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: [patch] bit XOR aggregate functions" }, { "msg_contents": "On 3/6/21 9:00 PM, David Fetter wrote:\n> On Sat, Mar 06, 2021 at 08:57:46PM +0100, Vik Fearing wrote:\n>> On 3/6/21 8:55 PM, David Fetter wrote:\n>>> On Wed, Mar 03, 2021 at 03:30:15PM +0100, Peter Eisentraut wrote:\n>>>> On 10.02.21 06:42, Kyotaro Horiguchi wrote:\n>>>>> We already had CREATE AGGREATE at the time, so BIT_XOR can be\n>>>>> thought as it falls into the same category with BIT_AND and\n>>>>> BIT_OR, that is, we may have BIT_XOR as an intrinsic aggregation?\n>>>>\n>>>> I think the use of BIT_XOR is quite separate from BIT_AND and\n>>>> BIT_OR. The latter give you an \"all\" or \"any\" result of the bits\n>>>> set. BIT_XOR will return 1 or true if an odd number of inputs are 1\n>>>> or true, which isn't useful by itself. But it can be used as a\n>>>> checksum, so it seems pretty reasonable to me to add it. Perhaps\n>>>> the use case could be pointed out in the documentation.\n>>>\n>>> If this is the only use case, is there some way to refuse to execute\n>>> it if it doesn't contain an unambiguous ORDER BY, as illustrated\n>>> below?\n>>>\n>>> SELECT BIT_XOR(b ORDER BY a, c)... /* works */\n>>> SELECT BIT_XOR(b) OVER (ORDER BY a, c)... /* works */\n>>> SELECT BIT_XOR(b) FROM... 
/* errors out */\n>>\n>>\n>> Why would such an error be necessary, or even desirable?\n> \n> Because there is no way to ensure that the results remain consistent\n> from one execution to the next without such a guarantee.\n\nI think one of us is forgetting how XOR works.\n-- \nVik Fearing\n\n\n", "msg_date": "Sat, 6 Mar 2021 21:03:25 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: [patch] bit XOR aggregate functions" }, { "msg_contents": "On Sat, Mar 06, 2021 at 09:03:25PM +0100, Vik Fearing wrote:\n> On 3/6/21 9:00 PM, David Fetter wrote:\n> > On Sat, Mar 06, 2021 at 08:57:46PM +0100, Vik Fearing wrote:\n> >> On 3/6/21 8:55 PM, David Fetter wrote:\n> >>> On Wed, Mar 03, 2021 at 03:30:15PM +0100, Peter Eisentraut wrote:\n> >>>> On 10.02.21 06:42, Kyotaro Horiguchi wrote:\n> >>>>> We already had CREATE AGGREATE at the time, so BIT_XOR can be\n> >>>>> thought as it falls into the same category with BIT_AND and\n> >>>>> BIT_OR, that is, we may have BIT_XOR as an intrinsic aggregation?\n> >>>>\n> >>>> I think the use of BIT_XOR is quite separate from BIT_AND and\n> >>>> BIT_OR. The latter give you an \"all\" or \"any\" result of the bits\n> >>>> set. BIT_XOR will return 1 or true if an odd number of inputs are 1\n> >>>> or true, which isn't useful by itself. But it can be used as a\n> >>>> checksum, so it seems pretty reasonable to me to add it. Perhaps\n> >>>> the use case could be pointed out in the documentation.\n> >>>\n> >>> If this is the only use case, is there some way to refuse to execute\n> >>> it if it doesn't contain an unambiguous ORDER BY, as illustrated\n> >>> below?\n> >>>\n> >>> SELECT BIT_XOR(b ORDER BY a, c)... /* works */\n> >>> SELECT BIT_XOR(b) OVER (ORDER BY a, c)... /* works */\n> >>> SELECT BIT_XOR(b) FROM... 
/* errors out */\n> >>\n> >>\n> >> Why would such an error be necessary, or even desirable?\n> > \n> > Because there is no way to ensure that the results remain consistent\n> > from one execution to the next without such a guarantee.\n> \n> I think one of us is forgetting how XOR works.\n\nOops. You're right.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Sat, 6 Mar 2021 20:05:44 +0000", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: [patch] bit XOR aggregate functions" }, { "msg_contents": "On Saturday, March 6, 2021, David Fetter <david@fetter.org> wrote:\n\n>\n> > > SELECT BIT_XOR(b ORDER BY a, c)... /* works */\n> > > SELECT BIT_XOR(b) OVER (ORDER BY a, c)... /* works */\n> > > SELECT BIT_XOR(b) FROM... /* errors out */\n> >\n> >\n> > Why would such an error be necessary, or even desirable?\n>\n> Because there is no way to ensure that the results remain consistent\n> from one execution to the next without such a guarantee.\n>\n\nNumerous existing aggregate functions have this behavior. Making those\nerror isn’t an option. So is making this a special case something we want\nto do (and also maybe make doing so the rule going forward)?\n\nDavid J.\n", "msg_date": "Sat, 6 Mar 2021 13:06:26 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [patch] bit XOR aggregate functions" }, { "msg_contents": "On 3/6/21 9:06 PM, David G. Johnston wrote:\n> On Saturday, March 6, 2021, David Fetter <david@fetter.org> wrote:\n> \n>>\n>>>> SELECT BIT_XOR(b ORDER BY a, c)... /* works */\n>>>> SELECT BIT_XOR(b) OVER (ORDER BY a, c)... /* works */\n>>>> SELECT BIT_XOR(b) FROM... /* errors out */\n>>>\n>>>\n>>> Why would such an error be necessary, or even desirable?\n>>\n>> Because there is no way to ensure that the results remain consistent\n>> from one execution to the next without such a guarantee.\n>>\n> \n> Numerous existing aggregate functions have this behavior. Making those\n> error isn’t an option. So is making this a special case something we want\n> to do (and also maybe make doing so the rule going forward)?\n\nAside from the fact that bit_xor() does not need this, I am opposed to\nit in general. It is not our job to make people write correct queries.\n-- \nVik Fearing\n\n\n", "msg_date": "Sun, 7 Mar 2021 10:36:35 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: [patch] bit XOR aggregate functions" }, { "msg_contents": "ne 7. 3. 2021 v 10:36 odesílatel Vik Fearing <vik@postgresfriends.org>\nnapsal:\n\n> On 3/6/21 9:06 PM, David G. Johnston wrote:\n> > On Saturday, March 6, 2021, David Fetter <david@fetter.org> wrote:\n> >\n> >>\n> >>>> SELECT BIT_XOR(b ORDER BY a, c)... /* works */\n> >>>> SELECT BIT_XOR(b) OVER (ORDER BY a, c)... /* works */\n> >>>> SELECT BIT_XOR(b) FROM... 
/* errors out */\n> >>>\n> >>>\n> >>> Why would such an error be necessary, or even desirable?\n> >>\n> >> Because there is no way to ensure that the results remain consistent\n> >> from one execution to the next without such a guarantee.\n> >>\n> >\n> > Numerous existing aggregate functions have this behavior. Making those\n> > error isn’t an option. So is making this a special case something we\n> want\n> > to do (and also maybe make doing so the rule going forward)?\n>\n> Aside from the fact that bit_xor() does not need this, I am opposed to\n> it in general. It is not our job to make people write correct queries.\n>\n\nI cannot agree with the last sentence. It is questions about costs and\nbenefits, but good tool should to make warnings when users does some stupid\nthings.\n\nIt is important at this time, because complexity in IT is pretty high, and\na lot of users are not well trained (but well trained people can make\nerrors too). And a lot of users have zero knowledge about technology, So\nwhen it is possible, and when it makes sense, then Postgres should be\nsimple and safe. I think it is important for renome too. It is about costs\nand benefits. Good reputation is a good benefit for us too. Ordered\naggregation was designed for some purposes, and should be used, when it has\nsense.\n\nRegards\n\nPavel\n\n-- \n> Vik Fearing\n>\n>\n>\n", "msg_date": "Sun, 7 Mar 2021 10:53:46 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [patch] bit XOR aggregate functions" }, { "msg_contents": "On 3/7/21 10:53 AM, Pavel Stehule wrote:\n> ne 7. 3. 2021 v 10:36 odesílatel Vik Fearing <vik@postgresfriends.org>\n> napsal:\n> \n>> On 3/6/21 9:06 PM, David G. Johnston wrote:\n>>> On Saturday, March 6, 2021, David Fetter <david@fetter.org> wrote:\n>>>\n>>>>\n>>>>>> SELECT BIT_XOR(b ORDER BY a, c)... /* works */\n>>>>>> SELECT BIT_XOR(b) OVER (ORDER BY a, c)... /* works */\n>>>>>> SELECT BIT_XOR(b) FROM... 
/* errors out */\n>>>>>\n>>>>>\n>>>>> Why would such an error be necessary, or even desirable?\n>>>>\n>>>> Because there is no way to ensure that the results remain consistent\n>>>> from one execution to the next without such a guarantee.\n>>>>\n>>>\n>>> Numerous existing aggregate functions have this behavior. Making those\n>>> error isn’t an option. So is making this a special case something we\n>> want\n>>> to do (and also maybe make doing so the rule going forward)?\n>>\n>> Aside from the fact that bit_xor() does not need this, I am opposed to\n>> it in general. It is not our job to make people write correct queries.\n>>\n> \n> I cannot agree with the last sentence. It is questions about costs and\n> benefits, but good tool should to make warnings when users does some stupid\n> things.\n> \n> It is important at this time, because complexity in IT is pretty high, and\n> a lot of users are not well trained (but well trained people can make\n> errors too). And a lot of users have zero knowledge about technology, So\n> when it is possible, and when it makes sense, then Postgres should be\n> simple and safe. I think it is important for renome too. It is about costs\n> and benefits. Good reputation is a good benefit for us too. Ordered\n> aggregation was designed for some purposes, and should be used, when it has\n> sense.\n\nHow many cycles do you recommend we spend on determining whether ORDER\nBY a, b is sufficient but ORDER BY a is not?\n\nIf we had an optimization_effort_level guc (I have often wanted that),\nthen I agree that this could be added to a very high level. But we\ndon't, so I don't want any of it.\n-- \nVik Fearing\n\n\n", "msg_date": "Sun, 7 Mar 2021 11:02:46 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: [patch] bit XOR aggregate functions" }, { "msg_contents": "ne 7. 3. 
2021 v 11:02 odesílatel Vik Fearing <vik@postgresfriends.org>\nnapsal:\n\n> On 3/7/21 10:53 AM, Pavel Stehule wrote:\n> > ne 7. 3. 2021 v 10:36 odesílatel Vik Fearing <vik@postgresfriends.org>\n> > napsal:\n> >\n> >> On 3/6/21 9:06 PM, David G. Johnston wrote:\n> >>> On Saturday, March 6, 2021, David Fetter <david@fetter.org> wrote:\n> >>>\n> >>>>\n> >>>>>> SELECT BIT_XOR(b ORDER BY a, c)... /* works */\n> >>>>>> SELECT BIT_XOR(b) OVER (ORDER BY a, c)... /* works */\n> >>>>>> SELECT BIT_XOR(b) FROM... /* errors out */\n> >>>>>\n> >>>>>\n> >>>>> Why would such an error be necessary, or even desirable?\n> >>>>\n> >>>> Because there is no way to ensure that the results remain consistent\n> >>>> from one execution to the next without such a guarantee.\n> >>>>\n> >>>\n> >>> Numerous existing aggregate functions have this behavior. Making those\n> >>> error isn’t an option. So is making this a special case something we\n> >> want\n> >>> to do (and also maybe make doing so the rule going forward)?\n> >>\n> >> Aside from the fact that bit_xor() does not need this, I am opposed to\n> >> it in general. It is not our job to make people write correct queries.\n> >>\n> >\n> > I cannot agree with the last sentence. It is questions about costs and\n> > benefits, but good tool should to make warnings when users does some\n> stupid\n> > things.\n> >\n> > It is important at this time, because complexity in IT is pretty high,\n> and\n> > a lot of users are not well trained (but well trained people can make\n> > errors too). And a lot of users have zero knowledge about technology, So\n> > when it is possible, and when it makes sense, then Postgres should be\n> > simple and safe. I think it is important for renome too. It is about\n> costs\n> > and benefits. Good reputation is a good benefit for us too. 
Ordered\n> > aggregation was designed for some purposes, and should be used, when it\n> has\n> > sense.\n>\n> How many cycles do you recommend we spend on determining whether ORDER\n> BY a, b is sufficient but ORDER BY a is not?\n>\n> If we had an optimization_effort_level guc (I have often wanted that),\n> then I agree that this could be added to a very high level. But we\n> don't, so I don't want any of it.\n>\n\nThe safeguard is mandatory ORDER BY clause.\n\n\n\n-- \n> Vik Fearing\n>\n", "msg_date": "Sun, 7 Mar 2021 11:05:55 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [patch] bit XOR aggregate functions" }, { "msg_contents": "On 3/7/21 11:05 AM, Pavel Stehule wrote:\n> ne 7. 3. 2021 v 11:02 odesílatel Vik Fearing <vik@postgresfriends.org>\n> napsal:\n> \n>> On 3/7/21 10:53 AM, Pavel Stehule wrote:\n>>> ne 7. 3. 2021 v 10:36 odesílatel Vik Fearing <vik@postgresfriends.org>\n>>> napsal:\n>>>\n>>>> On 3/6/21 9:06 PM, David G. Johnston wrote:\n>>>>> On Saturday, March 6, 2021, David Fetter <david@fetter.org> wrote:\n>>>>>\n>>>>>>\n>>>>>>>> SELECT BIT_XOR(b ORDER BY a, c)... /* works */\n>>>>>>>> SELECT BIT_XOR(b) OVER (ORDER BY a, c)... /* works */\n>>>>>>>> SELECT BIT_XOR(b) FROM... /* errors out */\n>>>>>>>\n>>>>>>>\n>>>>>>> Why would such an error be necessary, or even desirable?\n>>>>>>\n>>>>>> Because there is no way to ensure that the results remain consistent\n>>>>>> from one execution to the next without such a guarantee.\n>>>>>>\n>>>>>\n>>>>> Numerous existing aggregate functions have this behavior. Making those\n>>>>> error isn’t an option. 
So is making this a special case something we\n>>>> want\n>>>>> to do (and also maybe make doing so the rule going forward)?\n>>>>\n>>>> Aside from the fact that bit_xor() does not need this, I am opposed to\n>>>> it in general. It is not our job to make people write correct queries.\n>>>>\n>>>\n>>> I cannot agree with the last sentence. It is questions about costs and\n>>> benefits, but good tool should to make warnings when users does some\n>> stupid\n>>> things.\n>>>\n>>> It is important at this time, because complexity in IT is pretty high,\n>> and\n>>> a lot of users are not well trained (but well trained people can make\n>>> errors too). And a lot of users have zero knowledge about technology, So\n>>> when it is possible, and when it makes sense, then Postgres should be\n>>> simple and safe. I think it is important for renome too. It is about\n>> costs\n>>> and benefits. Good reputation is a good benefit for us too. Ordered\n>>> aggregation was designed for some purposes, and should be used, when it\n>> has\n>>> sense.\n>>\n>> How many cycles do you recommend we spend on determining whether ORDER\n>> BY a, b is sufficient but ORDER BY a is not?\n>>\n>> If we had an optimization_effort_level guc (I have often wanted that),\n>> then I agree that this could be added to a very high level. But we\n>> don't, so I don't want any of it.\n>>\n> \n> The safeguard is mandatory ORDER BY clause.\n\n\nAnd so you are now mandating an ORDER BY on every query and in every\naggregate and/or window function. Users will not like that at all. I\ncertainly shan't.\n-- \nVik Fearing\n\n\n", "msg_date": "Sun, 7 Mar 2021 11:13:42 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: [patch] bit XOR aggregate functions" }, { "msg_contents": ">\n> And so you are now mandating an ORDER BY on every query and in every\n> aggregate and/or window function. Users will not like that at all. 
I\n> certainly shan't.\n>\n\nThe mandatory ORDER BY clause should be necessary for operations when the\nresult depends on the order. You need an order for calculation of median.\nAnd you don't need to know an order for average. More if the result is one\nnumber and is not possible to do a visual check of correctness (like\nmedian).\n\n-- \n> Vik Fearing\n>\n", "msg_date": "Sun, 7 Mar 2021 11:24:02 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [patch] bit XOR aggregate functions" }, { "msg_contents": "On 3/7/21 11:24 AM, Pavel Stehule wrote:\n>>\n>> And so you are now mandating an ORDER BY on every query and in every\n>> aggregate and/or window function. Users will not like that at all. I\n>> certainly shan't.\n>>\n> \n> The mandatory ORDER BY clause should be necessary for operations when the\n> result depends on the order. You need an order for calculation of median.\n> And you don't need to know an order for average. More if the result is one\n> number and is not possible to do a visual check of correctness (like\n> median).\n\nThe syntax for median (percentile_cont(0.5)) already requires an order\nby clause. You are now requiring one on array_agg().\n-- \nVik Fearing\n\n\n", "msg_date": "Sun, 7 Mar 2021 11:28:55 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: [patch] bit XOR aggregate functions" }, { "msg_contents": "ne 7. 3. 
2021 v 11:28 odesílatel Vik Fearing <vik@postgresfriends.org>\nnapsal:\n\n> On 3/7/21 11:24 AM, Pavel Stehule wrote:\n> >>\n> >> And so you are now mandating an ORDER BY on every query and in every\n> >> aggregate and/or window function. Users will not like that at all. I\n> >> certainly shan't.\n> >>\n> >\n> > The mandatory ORDER BY clause should be necessary for operations when the\n> > result depends on the order. You need an order for calculation of median.\n> > And you don't need to know an order for average. More if the result is\n> one\n> > number and is not possible to do a visual check of correctness (like\n> > median).\n>\n> The syntax for median (percentile_cont(0.5)) already requires an order\n> by clause. You are now requiring one on array_agg().\n>\n\narray_agg is discuttable, because PostgreSQL arrays are ordered set type.\nBut very common usage is using arrays instead and unordered sets (because\nANSI/SQL sets) are not supported. But anyway - for arrays I can do visual\ncheck if it is ordered well or not.\n\n\n-- \n> Vik Fearing\n>\n", "msg_date": "Sun, 7 Mar 2021 11:37:31 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [patch] bit XOR aggregate functions" }, { "msg_contents": "On 3/7/21 11:37 AM, Pavel Stehule wrote:\n> ne 7. 3. 2021 v 11:28 odesílatel Vik Fearing <vik@postgresfriends.org>\n> napsal:\n> \n>> On 3/7/21 11:24 AM, Pavel Stehule wrote:\n>>>>\n>>>> And so you are now mandating an ORDER BY on every query and in every\n>>>> aggregate and/or window function. Users will not like that at all. I\n>>>> certainly shan't.\n>>>>\n>>>\n>>> The mandatory ORDER BY clause should be necessary for operations when the\n>>> result depends on the order. You need an order for calculation of median.\n>>> And you don't need to know an order for average. More if the result is\n>> one\n>>> number and is not possible to do a visual check of correctness (like\n>>> median).\n>>\n>> The syntax for median (percentile_cont(0.5)) already requires an order\n>> by clause. You are now requiring one on array_agg().\n>>\n> \n> array_agg is discuttable, because PostgreSQL arrays are ordered set type.\n> But very common usage is using arrays instead and unordered sets (because\n> ANSI/SQL sets) are not supported. But anyway - for arrays I can do visual\n> check if it is ordered well or not.\n\nIf by \"visual check\" you mean \"with my human eyeballs\" then I would\nargue that that is always the case and we don't need nannying for other\naggregates either.\n-- \nVik Fearing\n\n\n", "msg_date": "Sun, 7 Mar 2021 12:39:38 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: [patch] bit XOR aggregate functions" }, { "msg_contents": "ne 7. 3. 2021 v 12:39 odesílatel Vik Fearing <vik@postgresfriends.org>\nnapsal:\n\n> On 3/7/21 11:37 AM, Pavel Stehule wrote:\n> > ne 7. 3. 
2021 v 11:28 odesílatel Vik Fearing <vik@postgresfriends.org>\n> > napsal:\n> >\n> >> On 3/7/21 11:24 AM, Pavel Stehule wrote:\n> >>>>\n> >>>> And so you are now mandating an ORDER BY on every query and in every\n> >>>> aggregate and/or window function. Users will not like that at all. I\n> >>>> certainly shan't.\n> >>>>\n> >>>\n> >>> The mandatory ORDER BY clause should be necessary for operations when\n> the\n> >>> result depends on the order. You need an order for calculation of\n> median.\n> >>> And you don't need to know an order for average. More if the result is\n> >> one\n> >>> number and is not possible to do a visual check of correctness (like\n> >>> median).\n> >>\n> >> The syntax for median (percentile_cont(0.5)) already requires an order\n> >> by clause. You are now requiring one on array_agg().\n> >>\n> >\n> > array_agg is discuttable, because PostgreSQL arrays are ordered set type.\n> > But very common usage is using arrays instead and unordered sets (because\n> > ANSI/SQL sets) are not supported. But anyway - for arrays I can do visual\n> > check if it is ordered well or not.\n>\n> If by \"visual check\" you mean \"with my human eyeballs\" then I would\n> argue that that is always the case and we don't need nannying for other\n> aggregates either.\n>\n\nThe correct solution is using arrays like arrays and sets like sets. When\nyou mix two different features to one, then you will have problems.\n\nBut if I see {{1,2,3},{3,4,5}} I have some knowledge - it is not 100%, but\nit is. If I have 27373 as a result of median, I have nothing other\ninformation.\n\nThe design of arrays (in pg) was incremental - it is older than Postgres\nsupported ordered aggregates, and probably older than ANSI/SQL introduced\nsets. So the implementation of strong safeguards is not possible for\ncompatibility reasons. 
If I designed array_agg or string_agg today, then I\nprefer to design it like ordered aggregates.\n\nSure - it is about life philosophy, and it is about projects where you are,\nand about risks, .. some people prefer risks, some people prefer\nsafeguards. I see a complexity boom as a very big issue - I remember good\nbooks about programming on 50 pagers, and then now we should start from\ngreen or zero again or we have to implement most safeguards that are\npossible to hold systems workable. But anyway - a good system is robust,\nand robust systems try to reduce possible errors how it is possible (human\nerrors are most common).\n\nBut this is offtopic in this discussion :)\n\n\n\n\n\n-- \n> Vik Fearing\n>\n", "msg_date": "Sun, 7 Mar 2021 13:03:49 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [patch] bit XOR aggregate functions" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> But this is offtopic in this discussion :)\n\nThe whole topic is off-topic. 
As a general rule, things that depend on\ninput order shouldn't be declared as aggregates --- they should be window\nfunctions or ordered-set aggregates, for which the syntax forces you to\nspecify input order. All of the standard aggregates, and most of our\ncustom ones (including BIT_XOR) do not depend on input order (... mumble\nfloating-point roundoff error mumble ...), so forcing users to write an\nordering clause would be useless, not to mention being a SQL spec\nviolation.\n\nThere are a small minority like array_agg that do have such a dependency,\nbut as far as I recall our docs for each of those warn about the need to\nsort the input for reproducible results. I think that's sufficient.\nWho's to say whether a particular query actually requires reproducible\nresults? Seeing that we don't provide reproducible row ordering\nwithout an ORDER BY, I'm not sure why we should apply a different\nstandard to array_agg.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 07 Mar 2021 11:31:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [patch] bit XOR aggregate functions" }, { "msg_contents": "On Sun, 7 Mar 2021 at 23:24, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n The mandatory ORDER BY clause should be necessary for operations when\nthe result depends on the order. You need an order for calculation of\nmedian. And you don't need to know an order for average. More if the\nresult is one number and is not possible to do a visual check of\ncorrectness (like median).\n\nI really don't think so.\n\n# create table f (f float not null);\n# insert into f values(1e100),(-1e100),(1.5);\n# select sum(f order by f) from f;\n sum\n-----\n 0\n(1 row)\n\n# select sum(f) from f;\n sum\n-----\n 1.5\n(1 row)\n\nUsers are going to be pretty annoyed with us if we demanded that they\ninclude an ORDER BY for that query. 
Especially so since our ORDER BY\naggregate implementation still has no planner support.\n\nDavid\n\n\n", "msg_date": "Mon, 8 Mar 2021 10:08:14 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [patch] bit XOR aggregate functions" } ]
[ { "msg_contents": "Hello,\r\n\r\nI'm hoping to gather some early feedback on a heap optimization I've\r\nbeen working on. In short, I'm hoping to add \"partial heap only\r\ntuple\" (PHOT) support, which would allow you to skip updating indexes\r\nfor unchanged columns even when other indexes require updates. Today,\r\nHOT works wonders when no indexed columns are updated. However, as\r\nsoon as you touch one indexed column, you lose that optimization\r\nentirely, as you must update every index on the table. The resulting\r\nperformance impact is a pain point for many of our (AWS's) enterprise\r\ncustomers, so we'd like to lend a hand for some improvements in this\r\narea. For workloads involving a lot of columns and a lot of indexes,\r\nan optimization like PHOT can make a huge difference. I'm aware that\r\nthere was a previous attempt a few years ago to add a similar\r\noptimization called WARM [0] [1]. However, I only noticed this\r\nprevious effort after coming up with the design for PHOT, so I ended\r\nup taking a slightly different approach. I am also aware of a couple\r\nof recent nbtree improvements that may mitigate some of the impact of\r\nnon-HOT updates [2] [3], but I am hoping that PHOT serves as a nice\r\ncomplement to those. I've attached a very early proof-of-concept\r\npatch with the design described below.\r\n\r\nAs far as performance is concerned, it is simple enough to show major\r\nbenefits from PHOT by tacking on a large number of indexes and columns\r\nto a table. For a short pgbench run where each table had 5 additional\r\ntext columns and indexes on every column, I noticed a ~34% bump in\r\nTPS with PHOT [4]. Theoretically, the TPS bump should be even higher\r\nwith additional columns with indexes. In addition to showing such\r\nbenefits, I have also attempted to show that regular pgbench runs are\r\nnot significantly affected. 
For a short pgbench run with no table\r\nmodifications, I noticed a ~2% bump in TPS with PHOT [5].\r\n\r\nNext, I'll go into the design a bit. I've commandeered the two\r\nremaining bits in t_infomask2 to use as HEAP_PHOT_UPDATED and\r\nHEAP_PHOT_TUPLE. These are analogous to the HEAP_HOT_UPDATED and\r\nHEAP_ONLY_TUPLE bits. (If there are concerns about exhausting the\r\nt_infomask2 bits, I think we could only use one of the remaining bits\r\nas a \"modifier\" bit on the HOT ones. I opted against that for the\r\nproof-of-concept patch to keep things simple.) When creating a PHOT\r\ntuple, we only create new index tuples for updated columns. These new\r\nindex tuples point to the PHOT tuple. Following is a simple\r\ndemonstration with a table with two integer columns, each with its own\r\nindex:\r\n\r\npostgres=# SELECT heap_tuple_infomask_flags(t_infomask, t_infomask2), t_data\r\n FROM heap_page_items(get_raw_page('test', 0))\r\n WHERE t_infomask IS NOT NULL\r\n OR t_infomask2 IS NOT NULL;\r\n heap_tuple_infomask_flags | t_data\r\n-----------------------------------------------------------------------------+--------------------\r\n (\"{HEAP_XMIN_COMMITTED,HEAP_XMAX_COMMITTED,HEAP_PHOT_UPDATED}\",{}) | \\x0000000000000000\r\n (\"{HEAP_XMIN_COMMITTED,HEAP_UPDATED,HEAP_PHOT_UPDATED,HEAP_PHOT_TUPLE}\",{}) | \\x0100000000000000\r\n (\"{HEAP_XMAX_INVALID,HEAP_UPDATED,HEAP_PHOT_TUPLE}\",{}) | \\x0100000002000000\r\n(3 rows)\r\n\r\npostgres=# SELECT itemoffset, ctid, data\r\n FROM bt_page_items(get_raw_page('test_a_idx', 1));\r\n itemoffset | ctid | data\r\n------------+-------+-------------------------\r\n 1 | (0,1) | 00 00 00 00 00 00 00 00\r\n 2 | (0,2) | 01 00 00 00 00 00 00 00\r\n(2 rows)\r\n\r\npostgres=# SELECT itemoffset, ctid, data\r\n FROM bt_page_items(get_raw_page('test_b_idx', 1));\r\n itemoffset | ctid | data\r\n------------+-------+-------------------------\r\n 1 | (0,1) | 00 00 00 00 00 00 00 00\r\n 2 | (0,3) | 02 00 00 00 00 00 00 00\r\n(2 
rows)\r\n\r\nWhen it is time to scan through a PHOT chain, there are a couple of\r\nthings to account for. Sequential scans work out-of-the-box thanks to\r\nthe visibility rules, but other types of scans like index scans\r\nrequire additional checks. If you encounter a PHOT chain when\r\nperforming an index scan, you should only continue following the chain\r\nas long as none of the columns the index indexes are modified. If the\r\nscan does encounter such a modification, we stop following the chain\r\nand continue with the index scan. Even if there is a tuple in that\r\nPHOT chain that should be returned by our index scan, we will still\r\nfind it, as there will be another matching index tuple that points us\r\nto later in the PHOT chain. My initial idea for determining which\r\ncolumns were modified was to add a new bitmap after the \"nulls\" bitmap\r\nin the tuple header. However, the attached patch simply uses\r\nHeapDetermineModifiedColumns(). I've yet to measure the overhead of\r\nthis approach versus the bitmap approach, but I haven't noticed\r\nanything too detrimental in the testing I've done so far.\r\n\r\nIn my proof-of-concept patch, I've included a temporary hack to get\r\nsome basic bitmap scans working as expected. Since we won't have\r\nfollowed the PHOT chains in the bitmap index scan, we must know how to\r\nfollow them in the bitmap heap scan. Unfortunately, the bitmap heap\r\nscan has no knowledge of what indexed columns to pay attention to when\r\nfollowing the PHOT chains. My temporary hack fixes this by having the\r\nbitmap heap scan pull the set of indexed columns it needs to consider\r\nfrom the outer plan. I think this is one area of the design that will\r\nrequire substantially more effort. 
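To make the chain-following rule concrete, here is a toy model of the index-scan behavior described above (sketched in Python rather than PostgreSQL C; the function and data shapes are illustrative, and visibility checks are omitted):

```python
# Toy model of following a PHOT chain during an index scan.  Each chain
# member records which columns its update modified; the scan stops as soon
# as a later version modified a column this index covers.  Visibility
# checks are omitted; names are illustrative, not PostgreSQL internals.

def follow_phot_chain(chain, start, index_cols):
    result = []
    tid = start
    while tid is not None:
        modified_cols, payload, next_tid = chain[tid]
        if tid != start and modified_cols & index_cols:
            # A column this index covers changed here, so this version is
            # reachable through its own index tuple instead.
            break
        result.append(payload)
        tid = next_tid
    return result

# Chain for a table (a, b): (0, 0) -> (0, 1) -> (2, 1).  The second update
# modified "a", so a scan arriving via the index on "a" stops before it.
chain = {
    1: (set(), (0, 0), 2),     # root version
    2: ({"b"}, (0, 1), 3),     # PHOT update: only "b" changed
    3: ({"a"}, (2, 1), None),  # PHOT update: "a" changed
}
print(follow_phot_chain(chain, 1, {"a"}))  # [(0, 0), (0, 1)]
print(follow_phot_chain(chain, 3, {"a"}))  # [(2, 1)]
```

Note how the version (2, 1) is not lost to the scan on "a": it is simply reached through the later index tuple that its own update created, rather than by walking the chain.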
Following is a demonstration of a\r\nbasic sequential scan and bitmap scan:\r\n\r\npostgres=# EXPLAIN (COSTS FALSE) SELECT * FROM test;\r\n QUERY PLAN\r\n------------------\r\n Seq Scan on test\r\n(1 row)\r\n\r\npostgres=# SELECT * FROM test;\r\n a | b\r\n---+---\r\n 1 | 2\r\n(1 row)\r\n\r\npostgres=# EXPLAIN (COSTS FALSE) SELECT * FROM test WHERE a >= 0;\r\n QUERY PLAN\r\n---------------------------------------\r\n Bitmap Heap Scan on test\r\n Recheck Cond: (a >= 0)\r\n -> Bitmap Index Scan on test_a_idx\r\n Index Cond: (a >= 0)\r\n(4 rows)\r\n\r\npostgres=# SELECT * FROM test WHERE a >= 0;\r\n a | b\r\n---+---\r\n 1 | 2\r\n(1 row)\r\n\r\nThis design allows for \"weaving\" between HOT and PHOT in a chain.\r\nHowever, it is still important to treat each consecutive set of HOT\r\nupdates or PHOT updates as its own chain for the purposes of pruning\r\nand cleanup. Pruning is heavily restricted for PHOT due to the\r\npresence of corresponding index tuples. I believe we can redirect\r\nline pointers for consecutive sets of PHOT updates that modify the\r\nsame set of indexed columns, but this is only possible if no index has\r\nduplicate values in the redirected set. Also, I do not think it is\r\npossible to prune intermediate line pointers in a PHOT chain. 
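The redirect-eligibility condition just described (every update in the run modifies the same set of indexed columns, and no index sees a duplicate value within the run) can be sketched as a toy check; this is illustrative Python, not PostgreSQL code, and the data shapes are invented for the example:

```python
# Toy check for whether a consecutive run of PHOT updates is eligible for
# line pointer redirection.  Each update is described by the set of indexed
# columns it modified plus its new indexed values.  All names and shapes
# are illustrative, not PostgreSQL internals.

def can_redirect(updates):
    """updates: list of (modified_indexed_cols, {col: value}) pairs."""
    if not updates:
        return False
    # Every update in the run must modify the same set of indexed columns.
    first = frozenset(updates[0][0])
    if any(frozenset(mod) != first for mod, _ in updates):
        return False
    # No index may see a duplicate value within the run, since its entries
    # would then collapse onto a single redirected line pointer.
    for col in first:
        vals = [values[col] for _, values in updates]
        if len(vals) != len(set(vals)):
            return False
    return True

# E.g., two consecutive updates that each modify only column "a", with
# distinct values, are redirectable; a duplicate value or a differing
# modified set is not.
print(can_redirect([({"a"}, {"a": 1}), ({"a"}, {"a": 2})]))  # True
print(can_redirect([({"a"}, {"a": 1}), ({"a"}, {"a": 1})]))  # False
print(can_redirect([({"a"}, {"a": 1}), ({"b"}, {"b": 1})]))  # False
```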
While\r\nit may be possible to redirect all line pointers to the final tuple in\r\na series of updates to the same set of indexed columns, my hunch is\r\nthat doing so will add significant complexity for tracking\r\nintermediate updates, and any performance gains will be marginal.\r\nI've created some small diagrams to illustrate my proposed cleanup\r\nstrategy.\r\n\r\nHere is a hypothetical PHOT chain.\r\n\r\n idx1 0 1 2\r\n idx2 0 1 2\r\n idx3 0\r\n lp 1 2 3 4 5\r\n heap (0,0,0) (1,0,0) (2,0,0) (2,1,0) (2,2,0)\r\n\r\nHeap tuples may be removed and line pointers may be redirected for\r\nconsecutive updates to the same set of indexes (as long as no index\r\nhas duplicate values in the redirected set of updates).\r\n\r\n idx1 0 1 2\r\n idx2 0 1 2\r\n idx3 0\r\n lp 1 2 -> 3 4 -> 5\r\n heap (0,0,0) (2,0,0) (2,2,0)\r\n\r\nWhen following redirect chains, we must check that the \"interesting\"\r\ncolumns for the relevant indexes are not updated whenever a tuple is\r\nfound. In order to be eligible for cleanup, the final tuple in the\r\ncorresponding PHOT/HOT chain must also be eligible for cleanup, or all\r\nindexes must have been updated later in the chain before any visible\r\ntuples. (I suspect that the former condition may cause significant\r\nbloat for some workloads and the latter condition may be prohibitively\r\ncomplicated. Perhaps this can be mitigated by limiting how long we\r\nallow PHOT chains to get.) My proof-of-concept patch does not yet\r\nimplement line pointer redirecting and cleanup, so it is possible that\r\nI am missing some additional obstacles and optimizations here.\r\n\r\nI think PostgreSQL 15 is realistically the earliest target version for\r\nthis change. Given that others find this project worthwhile, that's\r\nmy goal for this patch. 
I've CC'd a number of folks who have been\r\ninvolved in this project already and who I'm hoping will continue to\r\nhelp me drive this forward.\r\n\r\nNathan\r\n\r\n[0] https://www.postgresql.org/message-id/flat/CABOikdMop5Rb_RnS2xFdAXMZGSqcJ-P-BY2ruMd%2BbuUkJ4iDPw%40mail.gmail.com\r\n[1] https://www.postgresql.org/message-id/flat/CABOikdMNy6yowA%2BwTGK9RVd8iw%2BCzqHeQSGpW7Yka_4RSZ_LOQ%40mail.gmail.com\r\n[2] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=0d861bbb\r\n[3] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=d168b666\r\n[4] non-PHOT:\r\n transaction type: <builtin: TPC-B (sort of)>\r\n scaling factor: 1000\r\n query mode: simple\r\n number of clients: 256\r\n number of threads: 256\r\n duration: 1800 s\r\n number of transactions actually processed: 29759733\r\n latency average = 15.484 ms\r\n latency stddev = 10.102 ms\r\n tps = 16530.552950 (including connections establishing)\r\n tps = 16530.730565 (excluding connections establishing)\r\n\r\n PHOT:\r\n ...\r\n number of transactions actually processed: 39998968\r\n latency average = 11.520 ms\r\n latency stddev = 8.157 ms\r\n tps = 22220.709117 (including connections establishing)\r\n tps = 22221.182648 (excluding connections establishing)\r\n[5] non-PHOT:\r\n ...\r\n number of transactions actually processed: 151841961\r\n latency average = 3.034 ms\r\n latency stddev = 1.854 ms\r\n tps = 84354.268591 (including connections establishing)\r\n tps = 84355.061353 (excluding connections establishing)\r\n\r\n PHOT:\r\n ...\r\n number of transactions actually processed: 155225857\r\n latency average = 2.968 ms\r\n latency stddev = 1.264 ms\r\n tps = 86234.044783 (including connections establishing)\r\n tps = 86234.961286 (excluding connections establishing)", "msg_date": "Tue, 9 Feb 2021 18:48:21 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "partial heap only tuples" }, { "msg_contents": "On Tue, Feb 9, 2021 at 06:48:21PM 
+0000, Bossart, Nathan wrote:\n> Hello,\n> \n> I'm hoping to gather some early feedback on a heap optimization I've\n> been working on. In short, I'm hoping to add \"partial heap only\n> tuple\" (PHOT) support, which would allow you to skip updating indexes\n> for unchanged columns even when other indexes require updates. Today,\n\nI think it is great you are working on this. I think it is a major way\nto improve performance and I have been disappointed it has not moved\nforward since 2016.\n\n> HOT works wonders when no indexed columns are updated. However, as\n> soon as you touch one indexed column, you lose that optimization\n> entirely, as you must update every index on the table. The resulting\n> performance impact is a pain point for many of our (AWS's) enterprise\n> customers, so we'd like to lend a hand for some improvements in this\n> area. For workloads involving a lot of columns and a lot of indexes,\n> an optimization like PHOT can make a huge difference. I'm aware that\n> there was a previous attempt a few years ago to add a similar\n> optimization called WARM [0] [1]. However, I only noticed this\n> previous effort after coming up with the design for PHOT, so I ended\n> up taking a slightly different approach. I am also aware of a couple\n> of recent nbtree improvements that may mitigate some of the impact of\n> non-HOT updates [2] [3], but I am hoping that PHOT serves as a nice\n> complement to those. I've attached a very early proof-of-concept\n> patch with the design described below.\n\nHow is your approach different from those of [0] and [1]? It is\ninteresting you still see performance benefits even after the btree\nduplication improvements. Did you test with those improvements?\n\n> As far as performance is concerned, it is simple enough to show major\n> benefits from PHOT by tacking on a large number of indexes and columns\n> to a table. 
For a short pgbench run where each table had 5 additional\n> text columns and indexes on every column, I noticed a ~34% bump in\n> TPS with PHOT [4]. Theoretically, the TPS bump should be even higher\n\nThat's a big improvement.\n\n> Next, I'll go into the design a bit. I've commandeered the two\n> remaining bits in t_infomask2 to use as HEAP_PHOT_UPDATED and\n> HEAP_PHOT_TUPLE. These are analogous to the HEAP_HOT_UPDATED and\n> HEAP_ONLY_TUPLE bits. (If there are concerns about exhausting the\n> t_infomask2 bits, I think we could only use one of the remaining bits\n> as a \"modifier\" bit on the HOT ones. I opted against that for the\n> proof-of-concept patch to keep things simple.) When creating a PHOT\n> tuple, we only create new index tuples for updated columns. These new\n> index tuples point to the PHOT tuple. Following is a simple\n> demonstration with a table with two integer columns, each with its own\n> index:\n\nWhatever solution you have, you have to be able to handle\nadding/removing columns, and adding/removing indexes.\n\n> When it is time to scan through a PHOT chain, there are a couple of\n> things to account for. Sequential scans work out-of-the-box thanks to\n> the visibility rules, but other types of scans like index scans\n> require additional checks. If you encounter a PHOT chain when\n> performing an index scan, you should only continue following the chain\n> as long as none of the columns the index indexes are modified. If the\n> scan does encounter such a modification, we stop following the chain\n> and continue with the index scan. Even if there is a tuple in that\n\nI think in patch [0] and [1], if an index column changes, all the\nindexes had to be inserted into, while you seem to require inserts only\ninto the index that needs it. Is that correct?\n\n> PHOT chain that should be returned by our index scan, we will still\n> find it, as there will be another matching index tuple that points us\n> to later in the PHOT chain. 
My initial idea for determining which\n> columns were modified was to add a new bitmap after the \"nulls\" bitmap\n> in the tuple header. However, the attached patch simply uses\n> HeapDetermineModifiedColumns(). I've yet to measure the overhead of\n> this approach versus the bitmap approach, but I haven't noticed\n> anything too detrimental in the testing I've done so far.\n\nA bitmap is an interesting approach, but you are right it will need\nbenchmarking.\n\nI wonder if you should create a Postgres wiki page to document all of\nthis. I agree PG 15 makes sense. I would like to help with this if I\ncan. I will need to study this email more later.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 10 Feb 2021 17:43:44 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: partial heap only tuples" }, { "msg_contents": "On 2/10/21, 2:43 PM, \"Bruce Momjian\" <bruce@momjian.us> wrote:\r\n> On Tue, Feb 9, 2021 at 06:48:21PM +0000, Bossart, Nathan wrote:\r\n>> HOT works wonders when no indexed columns are updated. However, as\r\n>> soon as you touch one indexed column, you lose that optimization\r\n>> entirely, as you must update every index on the table. The resulting\r\n>> performance impact is a pain point for many of our (AWS's) enterprise\r\n>> customers, so we'd like to lend a hand for some improvements in this\r\n>> area. For workloads involving a lot of columns and a lot of indexes,\r\n>> an optimization like PHOT can make a huge difference. I'm aware that\r\n>> there was a previous attempt a few years ago to add a similar\r\n>> optimization called WARM [0] [1]. However, I only noticed this\r\n>> previous effort after coming up with the design for PHOT, so I ended\r\n>> up taking a slightly different approach. 
I am also aware of a couple\r\n>> of recent nbtree improvements that may mitigate some of the impact of\r\n>> non-HOT updates [2] [3], but I am hoping that PHOT serves as a nice\r\n>> complement to those. I've attached a very early proof-of-concept\r\n>> patch with the design described below.\r\n>\r\n> How is your approach different from those of [0] and [1]? It is\r\n> interesting you still see performance benefits even after the btree\r\n> duplication improvements. Did you test with those improvements?\r\n\r\nI believe one of the main differences is that index tuples will point\r\nto the corresponding PHOT tuple instead of the root of the HOT/PHOT\r\nchain. I'm sure there are other differences. I plan on giving those\r\ntwo long threads another read-through in the near future.\r\n\r\nI made sure that the btree duplication improvements were applied for\r\nmy benchmarking. IIUC those don't alleviate the requirement that you\r\ninsert all index tuples for non-HOT updates, so PHOT can still provide\r\nsome added benefits there.\r\n\r\n>> Next, I'll go into the design a bit. I've commandeered the two\r\n>> remaining bits in t_infomask2 to use as HEAP_PHOT_UPDATED and\r\n>> HEAP_PHOT_TUPLE. These are analogous to the HEAP_HOT_UPDATED and\r\n>> HEAP_ONLY_TUPLE bits. (If there are concerns about exhausting the\r\n>> t_infomask2 bits, I think we could only use one of the remaining bits\r\n>> as a \"modifier\" bit on the HOT ones. I opted against that for the\r\n>> proof-of-concept patch to keep things simple.) When creating a PHOT\r\n>> tuple, we only create new index tuples for updated columns. These new\r\n>> index tuples point to the PHOT tuple. 
Following is a simple\r\n>> demonstration with a table with two integer columns, each with its own\r\n>> index:\r\n>\r\n> Whatever solution you have, you have to be able to handle\r\n> adding/removing columns, and adding/removing indexes.\r\n\r\nI admittedly have not thought too much about the implications of\r\nadding/removing columns and indexes for PHOT yet, but that's\r\ndefinitely an important part of this project that I need to look into.\r\nI see that HOT has some special handling for commands like CREATE\r\nINDEX that I can reference.\r\n\r\n>> When it is time to scan through a PHOT chain, there are a couple of\r\n>> things to account for. Sequential scans work out-of-the-box thanks to\r\n>> the visibility rules, but other types of scans like index scans\r\n>> require additional checks. If you encounter a PHOT chain when\r\n>> performing an index scan, you should only continue following the chain\r\n>> as long as none of the columns the index indexes are modified. If the\r\n>> scan does encounter such a modification, we stop following the chain\r\n>> and continue with the index scan. Even if there is a tuple in that\r\n>\r\n> I think in patch [0] and [1], if an index column changes, all the\r\n> indexes had to be inserted into, while you seem to require inserts only\r\n> into the index that needs it. Is that correct?\r\n\r\nRight, PHOT only requires new index tuples for the modified columns.\r\nHowever, I was under the impression that WARM aimed to do the same\r\nthing. I might be misunderstanding your question.\r\n\r\n> I wonder if you should create a Postgres wiki page to document all of\r\n> this. I agree PG 15 makes sense. I would like to help with this if I\r\n> can. I will need to study this email more later.\r\n\r\nThanks for taking a look. I think a wiki is a good idea for keeping\r\ntrack of the current state of the design. 
I'll look into that.\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 11 Feb 2021 01:27:16 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: partial heap only tuples" }, { "msg_contents": "Hi,\n\nOn 2021-02-09 18:48:21 +0000, Bossart, Nathan wrote:\n> In order to be eligible for cleanup, the final tuple in the\n> corresponding PHOT/HOT chain must also be eligible for cleanup, or all\n> indexes must have been updated later in the chain before any visible\n> tuples.\n\nThis sounds like it might be prohibitively painful. Adding effectively\nunremovable bloat to remove other bloat is not an uncomplicated\npremise. I think you'd really need a way to fully remove this as part of\nvacuum for this to be viable.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 13 Feb 2021 08:26:23 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: partial heap only tuples" }, { "msg_contents": "On 2/13/21, 8:26 AM, \"Andres Freund\" <andres@anarazel.de> wrote:\r\n> On 2021-02-09 18:48:21 +0000, Bossart, Nathan wrote:\r\n>> In order to be eligible for cleanup, the final tuple in the\r\n>> corresponding PHOT/HOT chain must also be eligible for cleanup, or all\r\n>> indexes must have been updated later in the chain before any visible\r\n>> tuples.\r\n>\r\n> This sounds like it might be prohibitively painful. Adding effectively\r\n> unremovable bloat to remove other bloat is not an uncomplicated\r\n> premise. I think you'd really need a way to fully remove this as part of\r\n> vacuum for this to be viable.\r\n\r\nYeah, this is something I'm concerned about. I think adding a bitmap\r\nof modified columns to the header of PHOT-updated tuples improves\r\nmatters quite a bit, even for single-page vacuuming. Following is a\r\nstrategy I've been developing (there may still be some gaps). 
Here's\r\na basic PHOT chain where all tuples are visible and the last one has\r\nnot been deleted or updated:\r\n\r\nidx1 0 1 2 3\r\nidx2 0 1 2\r\nidx3 0 2 3\r\nlp 1 2 3 4 5\r\ntuple (0,0,0) (0,1,1) (2,2,1) (2,2,2) (3,2,3)\r\nbitmap -xx xx- --x x-x\r\n\r\nFor single-page vacuum, we take the following actions:\r\n 1. Starting at the root of the PHOT chain, create an OR'd bitmap\r\n of the chain.\r\n 2. Walk backwards, OR-ing the bitmaps. Stop when the bitmap\r\n matches the one from step 1. As we walk backwards, identify\r\n \"key\" tuples, which are tuples where the OR'd bitmap changes as\r\n you walk backwards. If the OR'd bitmap does not include all\r\n columns for the table, also include the root of the PHOT chain\r\n as a key tuple.\r\n 3. Redirect each key tuple to the next key tuple.\r\n 4. For all but the first key tuple, OR the bitmaps of all pruned\r\n tuples from each key tuple (exclusive) to the next key tuple\r\n (inclusive) and store the result in the bitmap of the next key\r\n tuple.\r\n 5. Mark all line pointers for all non-key tuples as dead. Storage\r\n can be removed for all tuples except the last one, but we must\r\n leave around the bitmap for all key tuples except for the first\r\n one.\r\n\r\nAfter this, our basic PHOT chain looks like this:\r\n\r\nidx1 0 1 2 3\r\nidx2 0 1 2\r\nidx3 0 2 3\r\nlp X X 3->5 X 5\r\ntuple (3,2,3)\r\nbitmap x-x\r\n\r\nWithout PHOT, this intermediate state would have 15 index tuples, 5\r\nline pointers, and 1 heap tuple. With PHOT, we have 10 index tuples,\r\n5 line pointers, 1 heap tuple, and 1 bitmap. When we vacuum the\r\nindexes, we can reclaim the dead line pointers and remove the\r\nassociated index tuples:\r\n\r\nidx1 3\r\nidx2 2\r\nidx3 2 3\r\nlp 3->5 5\r\ntuple (3,2,3)\r\nbitmap x-x\r\n\r\nWithout PHOT, this final state would have 3 index tuples, 1 line\r\npointer, and 1 heap tuple. With PHOT, we have 4 index tuples, 2 line\r\npointers, 1 heap tuple, and 1 bitmap. 
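The key-tuple identification in steps 1 and 2 can be sketched as a toy calculation (Python for brevity rather than PostgreSQL C; bitmaps are modeled as integers with the leftmost column as bit 0, and all names are illustrative):

```python
# Toy sketch of steps 1-2 above: identify "key" tuples in a PHOT chain by
# OR-ing modified-column bitmaps while walking backwards.  bitmaps[i] is
# the modified-column bitmap of chain member i (the root has an empty
# bitmap).  Names are illustrative, not PostgreSQL internals.

def key_tuples(bitmaps, ncols):
    full = 0
    for b in bitmaps:                 # step 1: OR'd bitmap of the chain
        full |= b
    keys = []
    acc = 0
    for i in range(len(bitmaps) - 1, 0, -1):   # step 2: walk backwards
        new = acc | bitmaps[i]
        if new != acc:                # OR'd bitmap changed: a key tuple
            keys.append(i)
        acc = new
        if acc == full:               # matches the step-1 bitmap: stop
            break
    if full != (1 << ncols) - 1:
        keys.append(0)                # root is a key tuple if some column
                                      # was never modified
    return sorted(keys)

# Chain from the example above: bitmaps -xx, xx-, --x, x-x over 3 columns
# (leftmost column = bit 0), with an empty bitmap for the root.
bitmaps = [0b000, 0b110, 0b011, 0b100, 0b101]
print(key_tuples(bitmaps, 3))  # [2, 4], i.e. lp3 and lp5
```

The result matches the pruned chain shown above (lp X X 3->5 X 5): lp3 and lp5 survive as key tuples while the others can be marked dead.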
Overall, we still end up\r\nkeeping around more line pointers and tuple headers (for the bitmaps),\r\nbut maybe that is good enough. I think the next step here would be to\r\nfind a way to remove some of the unnecessary index tuples and adjust\r\nthe remaining ones to point to the last line pointer in the PHOT\r\nchain.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 15 Feb 2021 20:19:40 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: partial heap only tuples" }, { "msg_contents": "On 2/10/21, 2:43 PM, \"Bruce Momjian\" <bruce@momjian.us> wrote:\r\n> I wonder if you should create a Postgres wiki page to document all of\r\n> this. I agree PG 15 makes sense. I would like to help with this if I\r\n> can. I will need to study this email more later.\r\n\r\nI've started the wiki page for this:\r\n\r\n https://wiki.postgresql.org/wiki/Partial_Heap_Only_Tuples\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 23 Feb 2021 22:22:16 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: partial heap only tuples" }, { "msg_contents": "On Wed, Feb 24, 2021 at 3:22 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n\n> On 2/10/21, 2:43 PM, \"Bruce Momjian\" <bruce@momjian.us> wrote:\n> > I wonder if you should create a Postgres wiki page to document all of\n> > this. I agree PG 15 makes sense. I would like to help with this if I\n> > can. 
I will need to study this email more later.\n>\n> I've started the wiki page for this:\n>\n> https://wiki.postgresql.org/wiki/Partial_Heap_Only_Tuples\n>\n> Nathan\n>\n>\nThe regression test case (partial-index) is failing\n\nhttps://cirrus-ci.com/task/5310522716323840\n\n----\n=== ./src/test/isolation/output_iso/regression.diffs ===\ndiff -U3 /tmp/cirrus-ci-build/src/test/isolation/expected/partial-index.out\n/tmp/cirrus-ci-build/src/test/isolation/output_iso/results/partial-index.out\n--- /tmp/cirrus-ci-build/src/test/isolation/expected/partial-index.out\n2021-03-06 23:11:08.018868833 +0000\n+++\n/tmp/cirrus-ci-build/src/test/isolation/output_iso/results/partial-index.out\n2021-03-06 23:26:15.857027075 +0000\n@@ -30,6 +30,8 @@\n6 a 1\n7 a 1\n8 a 1\n+9 a 2\n+10 a 2\nstep c2: COMMIT;\nstarting permutation: rxy1 wx1 wy2 c1 rxy2 c2\n@@ -83,6 +85,7 @@\n6 a 1\n7 a 1\n8 a 1\n+9 a 2\n10 a 1\nstep c1: COMMIT;\n----\n\nCan you please take a look at that?\n\n-- \nIbrar Ahmed\n", "msg_date": "Mon, 8 Mar 2021 23:14:56 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: partial heap only tuples" }, { "msg_contents": "On 3/8/21, 10:16 AM, \"Ibrar Ahmed\" <ibrar.ahmad@gmail.com> wrote:\r\n> On Wed, Feb 24, 2021 at 3:22 AM Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> On 2/10/21, 2:43 PM, \"Bruce Momjian\" <bruce@momjian.us> wrote:\r\n>>> I wonder if you should create a Postgres wiki page to document all of\r\n>>> this. I agree PG 15 makes sense. I would like to help with this if I\r\n>>> can. I will need to study this email more later.\r\n>>\r\n>> I've started the wiki page for this:\r\n>>\r\n>> https://wiki.postgresql.org/wiki/Partial_Heap_Only_Tuples\r\n>>\r\n>> Nathan\r\n>\r\n> The regression test case (partial-index) is failing \r\n>\r\n> https://cirrus-ci.com/task/5310522716323840 \r\n\r\nThis patch is intended as a proof-of-concept of some basic pieces of\r\nthe project. 
I'm working on a new patch set that should be more\r\nsuitable for community review.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 8 Mar 2021 18:38:55 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: partial heap only tuples" }, { "msg_contents": "On Mon, Feb 15, 2021 at 08:19:40PM +0000, Bossart, Nathan wrote:\n> Yeah, this is something I'm concerned about. I think adding a bitmap\n> of modified columns to the header of PHOT-updated tuples improves\n> matters quite a bit, even for single-page vacuuming. Following is a\n> strategy I've been developing (there may still be some gaps). Here's\n> a basic PHOT chain where all tuples are visible and the last one has\n> not been deleted or updated:\n> \n> idx1 0 1 2 3\n> idx2 0 1 2\n> idx3 0 2 3\n> lp 1 2 3 4 5\n> tuple (0,0,0) (0,1,1) (2,2,1) (2,2,2) (3,2,3)\n> bitmap -xx xx- --x x-x\n\nFirst, I want to continue encouraging you to work on this because I\nthink it can yield big improvements. Second, I like the wiki you\ncreated. Third, the diagram above seems to be more meaningful if read\nfrom the bottom-up. I suggest you reorder it on the wiki so it can be\nread top-down, maybe:\n\n> lp 1 2 3 4 5\n> tuple (0,0,0) (0,1,1) (2,2,1) (2,2,2) (3,2,3)\n> bitmap -xx xx- --x x-x\n> idx1 0 1 2 3\n> idx2 0 1 2\n> idx3 0 2 3\n\nFourth, I know in the wiki you said create/drop index needs more\nresearch, but I suggest you avoid any design that will be overly complex\nfor create/drop index. For example, a per-row bitmap that is based on\nwhat indexes exist at time of row creation might cause unacceptable\nproblems in handling create/drop index. Would you number indexes? 
I am\nnot saying you have to solve all the problems now, but you have to keep\nyour eye on obstacles that might block your progress later.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 9 Mar 2021 11:23:42 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: partial heap only tuples" }, { "msg_contents": "On 3/9/21, 8:24 AM, \"Bruce Momjian\" <bruce@momjian.us> wrote:\r\n> On Mon, Feb 15, 2021 at 08:19:40PM +0000, Bossart, Nathan wrote:\r\n>> Yeah, this is something I'm concerned about. I think adding a bitmap\r\n>> of modified columns to the header of PHOT-updated tuples improves\r\n>> matters quite a bit, even for single-page vacuuming. Following is a\r\n>> strategy I've been developing (there may still be some gaps). Here's\r\n>> a basic PHOT chain where all tuples are visible and the last one has\r\n>> not been deleted or updated:\r\n>>\r\n>> idx1 0 1 2 3\r\n>> idx2 0 1 2\r\n>> idx3 0 2 3\r\n>> lp 1 2 3 4 5\r\n>> tuple (0,0,0) (0,1,1) (2,2,1) (2,2,2) (3,2,3)\r\n>> bitmap -xx xx- --x x-x\r\n>\r\n> First, I want to continue encouraging you to work on this because I\r\n> think it can yield big improvements. Second, I like the wiki you\r\n> created. Third, the diagram above seems to be more meaningful if read\r\n> from the bottom-up. I suggest you reorder it on the wiki so it can be\r\n> read top-down, maybe:\r\n>\r\n>> lp 1 2 3 4 5\r\n>> tuple (0,0,0) (0,1,1) (2,2,1) (2,2,2) (3,2,3)\r\n>> bitmap -xx xx- --x x-x\r\n>> idx1 0 1 2 3\r\n>> idx2 0 1 2\r\n>> idx3 0 2 3\r\n\r\nI appreciate the feedback and the words of encouragement. I'll go\r\nahead and flip the diagrams like you suggested. I'm planning on\r\npublishing a larger round of edits to the wiki once the patch set is\r\nready to share. 
There are a few changes to the design that I've\r\npicked up along the way.\r\n\r\n> Fourth, I know in the wiki you said create/drop index needs more\r\n> research, but I suggest you avoid any design that will be overly complex\r\n> for create/drop index. For example, a per-row bitmap that is based on\r\n> what indexes exist at time of row creation might cause unacceptable\r\n> problems in handling create/drop index. Would you number indexes? I am\r\n> not saying you have to solve all the problems now, but you have to keep\r\n> your eye on obstacles that might block your progress later.\r\n\r\nI am agreed on avoiding an overly complex design. This project\r\nintroduces a certain amount of inherent complexity, so one of my main\r\ngoals is ensuring that it's easy to reason about each piece.\r\n\r\nI'm cautiously optimistic that index creation and deletion will not\r\nrequire too much extra work. For example, if a new index needs to\r\npoint to a partial heap only tuple, it can do so (unlike HOT, which\r\nwould require that the new index point to the root of the chain). The\r\nmodified-columns bitmaps could include the entire set of modified\r\ncolumns (not just the indexed ones), so no additional changes would\r\nneed to be made there. Furthermore, I'm anticipating that the\r\nmodified-columns bitmaps will end up only being used with the\r\nredirected LPs to help reduce heap bloat after single-page vacuuming.\r\nIn that case, new indexes would probably avoid the existing bitmaps\r\nanyway.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 9 Mar 2021 21:33:31 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: partial heap only tuples" }, { "msg_contents": "On Tue, Mar 9, 2021 at 09:33:31PM +0000, Bossart, Nathan wrote:\n> I'm cautiously optimistic that index creation and deletion will not\n> require too much extra work. 
For example, if a new index needs to\n> point to a partial heap only tuple, it can do so (unlike HOT, which\n> would require that the new index point to the root of the chain). The\n> modified-columns bitmaps could include the entire set of modified\n> columns (not just the indexed ones), so no additional changes would\n> need to be made there. Furthermore, I'm anticipating that the\n> modified-columns bitmaps will end up only being used with the\n> redirected LPs to help reduce heap bloat after single-page vacuuming.\n> In that case, new indexes would probably avoid the existing bitmaps\n> anyway.\n\nYes, that would probably work, sure.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 9 Mar 2021 18:48:43 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: partial heap only tuples" }, { "msg_contents": "On Tue, Feb 9, 2021 at 10:48 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n> I'm hoping to gather some early feedback on a heap optimization I've\n> been working on. In short, I'm hoping to add \"partial heap only\n> tuple\" (PHOT) support, which would allow you to skip updating indexes\n> for unchanged columns even when other indexes require updates. Today,\n> HOT works wonders when no indexed columns are updated. However, as\n> soon as you touch one indexed column, you lose that optimization\n> entirely, as you must update every index on the table. The resulting\n> performance impact is a pain point for many of our (AWS's) enterprise\n> customers, so we'd like to lend a hand for some improvements in this\n> area. For workloads involving a lot of columns and a lot of indexes,\n> an optimization like PHOT can make a huge difference. I'm aware that\n> there was a previous attempt a few years ago to add a similar\n> optimization called WARM [0] [1]. 
However, I only noticed this\n> previous effort after coming up with the design for PHOT, so I ended\n> up taking a slightly different approach. I am also aware of a couple\n> of recent nbtree improvements that may mitigate some of the impact of\n> non-HOT updates [2] [3], but I am hoping that PHOT serves as a nice\n> complement to those. I've attached a very early proof-of-concept\n> patch with the design described below.\n\nI would like to share some thoughts that I have about how I think\nabout optimizations like PHOT, and how they might fit together with my\nown work -- particularly the nbtree bottom-up index deletion feature\nyou referenced. My remarks could equally well apply to WARM.\nOrdinarily this is the kind of thing that would be too hand-wavey for\nthe mailing list, but we don't have the luxury of in-person\ncommunication right now.\n\nEverybody tends to talk about HOT as if it works perfectly once you\nmake some modest assumptions, such as \"there are no long-running\ntransactions\", and \"no UPDATEs will logically modify indexed columns\".\nBut I tend to doubt that that's truly the case -- I think that there\nare still pathological cases where HOT cannot keep the total table\nsize stable in the long run due to subtle effects that eventually\naggregate into significant issues, like heap fragmentation. Ask Jan\nWieck about the stability of some of the TPC-C/BenchmarkSQL tables to\nget one example of this. There is no reason to believe that PHOT will\nhelp with that. Maybe that's okay, but I would think carefully about\nwhat that means if I were undertaking this work. 
Ensuring stability in\nthe on-disk size of tables in cases where the size of the logical\ndatabase is stable should be an important goal of a project like PHOT\nor HOT.\n\nIf you want to get a better sense of how these inefficiencies might\nhappen, I suggest looking into using recently added autovacuum logging\nto analyze how well HOT works today, using the technique that I go\ninto here:\n\nhttps://postgr.es/m/CAH2-WzkjU+NiBskZunBDpz6trSe+aQvuPAe+xgM8ZvoB4wQwhA@mail.gmail.com\n\nSmall inefficiencies in the on-disk structure have a tendency to\naggregate over time, at least when there is no possible way to reverse\nthem. The bottom-up index deletion stuff is very effective as a\nbackstop against index bloat, because things are generally very\nnon-linear. The cost of an unnecessary page split is very high, and\npermanent. But we can make it cheap to *try* to avoid that using\nfairly simple heuristics. We can be reasonably confident that we're\nabout to split the page unnecessarily, and use cues that ramp up the\nnumber of heap page accesses as needed. We ramp up during a bottom-up\nindex deletion, as we manage to free some index tuples as a result of\nprevious heap page accesses.\n\nThis works very well because we can intervene very selectively. We\naren't interested in deleting index tuples unless and until we really\nhave to, and in general there tends to be quite a bit of free space to\ntemporarily store extra version duplicates -- that's what most index\npages look like, even on the busiest of databases. It's possible for\nthe bottom-up index deletion mechanism to be invoked very\ninfrequently, and yet make a huge difference. And when it fails to\nfree anything, it fails permanently for that particular leaf page\n(because it splits) -- so now we have lots of space for future index\ntuple insertions that cover the original page's key space. 
We won't\nthrash.\n\nMy intuition is that similar principles can be applied inside heapam.\nFailing to fit related versions on a heap page (having managed to do\nso for hours or days before that point) is more or less the heap page\nequivalent of a leaf page split from version churn (this is the\npathology that bottom-up index deletion targets). For example, we\ncould have a fall back mode that compresses old versions that is used\nif and only if heap pruning was attempted but then failed. We should\nalways try to avoid migrating to a new heap page, because that amounts\nto a permanent solution to a temporary problem. We should perhaps make\nthe updater work to prove that that's truly necessary, rather than\ngiving up immediately (i.e. assuming that it must be necessary at the\nfirst sign of trouble).\n\nWe might have successfully fit the successor heap tuple version a\nmillion times before just by HOT pruning, and yet currently we give up\njust because it didn't work on the one millionth and first occasion --\ndon't you think that's kind of silly? We may be able to afford having\na fallback strategy that is relatively expensive, provided it is\nrarely used. And it might be very effective in the aggregate, despite\nbeing rarely used -- it might provide us just what we were missing\nbefore. Just try harder when you run into a problem every once in a\nblue moon!\n\nA diversity of strategies with fallback behavior is sometimes the best\nstrategy. Don't underestimate the contribution of rare and seemingly\ninsignificant adverse events. Consider the lifecycle of the data over\ntime. 
If we quit trying to fit new versions on the same heap page at\nthe first sign of real trouble, then it's only a matter of time until\nwidespread heap fragmentation results -- each heap page only has to be\nunlucky once, and in the long run it's inevitable that they all will.\nWe could probably do better at nipping it in the bud at the level of\nindividual heap pages and opportunistic prune operations.\n\nI'm sure that it would be useful to not have to rely on bottom-up\nindex deletion in more cases -- I think that the idea of \"a better\nHOT\" might still be very helpful. Bottom-up index deletion is only\nsupposed to be a backstop against pathological behavior (version churn\npage splits), which is probably always going to be possible with a\nsufficiently extreme workload. I don't believe that the current levels\nof version churn/write amplification that we still see with Postgres\nmust be addressed through totally eliminating multiple versions of the\nsame logical row that live together in the same heap page. This idea\nis a false dichotomy. And it fails to acknowledge how the current\ndesign often works very well. When and how it fails to work well with\na real workload and real tuning (especially heap fill factor tuning)\nis probably not well understood. Why not start with that?\n\nOur default heap fill factor is 100. Maybe that's the right decision,\nbut it significantly impedes the ability of HOT to keep the size of\ntables stable over time. Just because heap fill factor 90 also has\nissues today doesn't mean that each pathological behavior cannot be\nfixed through targeted intervention. 
Maybe the myth that HOT works\nperfectly once you make some modest assumptions could come true.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 18 Apr 2021 16:27:15 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: partial heap only tuples" }, { "msg_contents": "On Sun, Apr 18, 2021 at 04:27:15PM -0700, Peter Geoghegan wrote:\n> Everybody tends to talk about HOT as if it works perfectly once you\n> make some modest assumptions, such as \"there are no long-running\n> transactions\", and \"no UPDATEs will logically modify indexed columns\".\n> But I tend to doubt that that's truly the case -- I think that there\n> are still pathological cases where HOT cannot keep the total table\n> size stable in the long run due to subtle effects that eventually\n> aggregate into significant issues, like heap fragmentation. Ask Jan\n> Wieck about the stability of some of the TPC-C/BenchmarkSQL tables to\n\n...\n\n> We might have successfully fit the successor heap tuple version a\n> million times before just by HOT pruning, and yet currently we give up\n> just because it didn't work on the one millionth and first occasion --\n> don't you think that's kind of silly? We may be able to afford having\n> a fallback strategy that is relatively expensive, provided it is\n> rarely used. And it might be very effective in the aggregate, despite\n> being rarely used -- it might provide us just what we were missing\n> before. Just try harder when you run into a problem every once in a\n> blue moon!\n> \n> A diversity of strategies with fallback behavior is sometimes the best\n> strategy. Don't underestimate the contribution of rare and seemingly\n> insignificant adverse events. 
Consider the lifecycle of the data over\n\nThat is an interesting point --- we often focus on optimizing frequent\noperations, but preventing rare but expensive-in-aggregate events from\nhappening is also useful.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 19 Apr 2021 20:09:17 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: partial heap only tuples" }, { "msg_contents": "On Mon, Apr 19, 2021 at 5:09 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > A diversity of strategies with fallback behavior is sometimes the best\n> > strategy. Don't underestimate the contribution of rare and seemingly\n> > insignificant adverse events. Consider the lifecycle of the data over\n>\n> That is an interesting point --- we often focus on optimizing frequent\n> operations, but preventing rare but expensive-in-aggregate events from\n> happening is also useful.\n\nRight. Similarly, we sometimes focus on adding an improvement,\noverlooking more promising opportunities to subtract a disimprovement.\nApparently this is a well known tendency:\n\nhttps://www.scientificamerican.com/article/our-brain-typically-overlooks-this-brilliant-problem-solving-strategy/\n\nI believe that it's particularly important to consider subtractive\napproaches with a complex system. 
This has sometimes worked well for\nme as a conscious and deliberate strategy.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 19 Apr 2021 17:41:42 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: partial heap only tuples" }, { "msg_contents": "On Tue, Mar 9, 2021 at 12:09 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 3/8/21, 10:16 AM, \"Ibrar Ahmed\" <ibrar.ahmad@gmail.com> wrote:\n> > On Wed, Feb 24, 2021 at 3:22 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n> >> On 2/10/21, 2:43 PM, \"Bruce Momjian\" <bruce@momjian.us> wrote:\n> >>> I wonder if you should create a Postgres wiki page to document all of\n> >>> this. I agree PG 15 makes sense. I would like to help with this if I\n> >>> can. I will need to study this email more later.\n> >>\n> >> I've started the wiki page for this:\n> >>\n> >> https://wiki.postgresql.org/wiki/Partial_Heap_Only_Tuples\n> >>\n> >> Nathan\n> >\n> > The regression test case (partial-index) is failing\n> >\n> > https://cirrus-ci.com/task/5310522716323840\n>\n> This patch is intended as a proof-of-concept of some basic pieces of\n> the project. I'm working on a new patch set that should be more\n> suitable for community review.\n\nThe patch does not apply on Head anymore, could you rebase and post a\npatch. I'm changing the status to \"Waiting for Author\".\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 14 Jul 2021 17:04:01 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: partial heap only tuples" }, { "msg_contents": "> On 14 Jul 2021, at 13:34, vignesh C <vignesh21@gmail.com> wrote:\n\n> The patch does not apply on Head anymore, could you rebase and post a\n> patch. I'm changing the status to \"Waiting for Author\".\n\nAs no update has been posted, the patch still doesn't apply. 
I'm marking this\npatch Returned with Feedback, feel free to open a new entry for an updated\npatch.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 4 Nov 2021 11:23:54 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: partial heap only tuples" }, { "msg_contents": "On 11/4/21, 3:24 AM, \"Daniel Gustafsson\" <daniel@yesql.se> wrote:\r\n> As no update has been posted, the patch still doesn't apply. I'm marking this\r\n> patch Returned with Feedback, feel free to open a new entry for an updated\r\n> patch.\r\n\r\nThanks. I have been working on this intermittently, and I hope to\r\npost a more complete proof-of-concept in the near future. I'll create\r\na new commitfest entry once that's done.\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 10 Nov 2021 17:17:44 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: partial heap only tuples" } ]
[ { "msg_contents": "There is a long standing problem with the way that nbtree page\ndeletion places deleted pages in the FSM for recycling: The use of a\n32-bit XID within the deleted page (in the special\narea's/BTPageOpaqueData struct's btpo.xact field) is not robust\nagainst XID wraparound, which can lead to permanently leaking pages in\na variety of scenarios. The problems became worse with the addition of\nthe INDEX_CLEANUP option in Postgres 12 [1]. And, using a 32-bit XID\nin this context creates risk for any further improvements in VACUUM\nthat similarly involve skipping whole indexes. For example, Masahiko\nhas been working on a patch that teaches VACUUM to skip indexes that\nare known to have very little garbage [2].\n\nAttached patch series fixes the issue once and for all. This is\nsomething that I'm targeting for Postgres 14, since it's more or less\na bug fix.\n\nThe first patch teaches nbtree to use 64-bit transaction IDs here, and\nso makes it impossible to leak deleted nbtree pages. This patch is the\nnbtree equivalent of commit 6655a729, which made GiST use 64-bit XIDs\ndue to exactly the same set of problems. The first patch also makes\nthe level field stored in nbtree page's special area/BTPageOpaqueData\nreliably store the level, even in a deleted page. This allows me to\nconsistently use the level field within amcheck, including even within\ndeleted pages.\n\nOf course it will still be possible for the FSM to leak deleted nbtree\nindex pages with the patch -- in general the FSM isn't crash safe.\nThat isn't so bad with the patch, though, because a subsequent VACUUM\nwill eventually notice the really old deleted pages, and add them back\nto the FSM once again. 
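To see the hazard that 64-bit XIDs remove, consider this simplified model (illustrative Python only -- the real code uses TransactionIdPrecedes() and FullTransactionId, and the details of the oldest-xmin horizon are more subtle):

```python
# Simplified model of why a 32-bit XID in a deleted page can leak the
# page after wraparound, while a 64-bit "full" XID cannot.

def xid32_precedes(a, b):
    """Modulo-2^32 ordering, in the style of TransactionIdPrecedes()."""
    return ((a - b) & 0xFFFFFFFF) >= 0x80000000   # i.e. (int32)(a - b) < 0

def recyclable_32(page_xact, oldest_running_xid):
    # Safe to recycle once the deleting XID is behind every backend's view.
    return xid32_precedes(page_xact, oldest_running_xid)

def recyclable_64(page_full_xact, oldest_running_full_xid):
    return page_full_xact < oldest_running_full_xid   # plain comparison

print(recyclable_32(100, 200))                 # -> True: recyclable, as expected

# If the page sits unvisited for ~2^31 XIDs, the 32-bit horizon "wraps
# past" the stored value and the deleted page suddenly looks too new:
wrapped_horizon = (100 + 2**31 + 1) & 0xFFFFFFFF
print(recyclable_32(100, wrapped_horizon))     # -> False: page leaked

# A 64-bit full XID never wraps, so the check stays correct forever:
print(recyclable_64(100, 100 + 2**31 + 1))     # -> True
```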
This will always happen because\nVACUUM/_bt_getbuf()/_bt_page_recyclable() can no longer become\nconfused about the age of deleted pages, even when they're really old.\n\nThe second patch in the series adds new information to VACUUM VERBOSE.\nThis makes it easy to understand what's going on here. Index page\ndeletion related output becomes more useful. It might also help with\ndebugging the first patch.\n\nCurrently, VACUUM VERBOSE output for an index that has some page\ndeletions looks like this:\n\n\"38 index pages have been deleted, 38 are currently reusable.\"\n\nWith the second patch applied, we might see this output at the same\npoint in VACUUM VERBOSE output instead:\n\n\"38 index pages have been deleted, 0 are newly deleted, 38 are\ncurrently reusable.\"\n\nThis means that out of the 38 of the pages that were found to be\nmarked deleted in the index, 0 were deleted by the VACUUM operation\nwhose output we see here. That is, there were 0 nbtree pages that were\nnewly marked BTP_DELETED within _bt_unlink_halfdead_page() during\n*this particular* VACUUM -- the VACUUM operation that we see\ninstrumentation about here. It follows that the 38 deleted pages that\nwe encountered must have been marked BTP_DELETED by some previous\nVACUUM operation.\n\nIn practice the \"%u are currently reusable\" output should never\ninclude newly deleted pages, since there is no way that a page marked\nBTP_DELETED can be put in the FSM during the same VACUUM operation --\nthat's unsafe (we need all of this recycling/XID indirection precisely\nbecause we need to delay recycling until it is truly safe, of course).\nNote that the \"%u index pages have been deleted\" output includes both\npages deleted by some previous VACUUM operation, and newly deleted\npages (no change there).\n\nNote that the new \"newly deleted\" output is instrumentation about this\nparticular *VACUUM operation*. 
In contrast, the other two existing\noutput numbers (\"deleted\" and \"currently reusable\") are actually\ninstrumentation about the state of the *index as a whole* at a point\nin time (barring concurrent recycling of pages counted in VACUUM by\nsome random _bt_getbuf() call in another backend). This fundamental\ndistinction is important here. All 3 numbers/stats that we output can\nhave different values, which can be used to debug the first patch. You\ncan directly observe uncommon cases just from the VERBOSE output, like\nwhen a long running transaction holds up recycling of a deleted page\nthat was actually marked BTP_DELETED in an *earlier* VACUUM operation.\nAnd so if the first patch had any bugs, there'd be a pretty good\nchance that you could observe them using multiple VACUUM VERBOSE\noperations -- you might notice something inconsistent or contradictory\njust by examining the output over time, how things change, etc.\n\n[1] https://postgr.es/m/CA+TgmoYD7Xpr1DWEWWXxiw4-WC1NBJf3Rb9D2QGpVYH9ejz9fA@mail.gmail.com\n[2] https://postgr.es/m/CAH2-WzmkebqPd4MVGuPTOS9bMFvp9MDs5cRTCOsv1rQJ3jCbXw@mail.gmail.com\n--\nPeter Geoghegan", "msg_date": "Tue, 9 Feb 2021 14:14:06 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Tue, Feb 9, 2021 at 2:14 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> The first patch teaches nbtree to use 64-bit transaction IDs here, and\n> so makes it impossible to leak deleted nbtree pages. This patch is the\n> nbtree equivalent of commit 6655a729, which made GiST use 64-bit XIDs\n> due to exactly the same set of problems.\n\nThere is an unresolved question for my deleted page XID patch: what\nshould it do about the vacuum_cleanup_index_scale_factor feature,\nwhich added an XID to the metapage (its btm_oldest_btpo_xact field). I\nrefer to the work done by commit 857f9c36cda for Postgres 11 by\nMasahiko. 
It would be good to get your opinion on this as the original\nauthor of that feature, Masahiko.\n\nTo recap, btm_oldest_btpo_xact is supposed to be the oldest XID among\nall deleted pages in the index, so clearly it needs to be carefully\nconsidered in my patch to make the XIDs 64-bit. Even still, v1 of my\npatch from today more or less ignores the issue -- it just gets a\n32-bit version of the oldest value as determined by the oldestBtpoXact\nXID tracking stuff (which is largely unchanged, except that it works\nwith 64-bit Full Transaction Ids now).\n\nObviously it is still possible for the 32-bit btm_oldest_btpo_xact\nfield to wrap around in v1 of my patch. The obvious thing to do here\nis to add a new epoch metapage field, effectively making\nbtm_oldest_btpo_xact 64-bit. However, I don't think that that's a good\nidea. The only reason that we have the btm_oldest_btpo_xact field in\nthe first place is to ameliorate the problem that the patch\ncomprehensively solves! We should stop storing *any* XIDs in the\nmetapage. (Besides, adding a new \"epoch\" field to the metapage would\nbe relatively messy.)\n\nHere is a plan that allows us to stop storing any kind of XID in the\nmetapage in all cases:\n\n1. Stop maintaining the oldest XID among all deleted pages in the\nentire nbtree index during VACUUM. So we can remove all of the\nBTVacState.oldestBtpoXact XID tracking stuff, which is currently\nsomething that even _bt_pagedel() needs special handling for.\n\n2. Stop considering the btm_oldest_btpo_xact metapage field in\n_bt_vacuum_needs_cleanup() -- now the \"Cleanup needed?\" logic only\ncares about maintaining reasonably accurate statistics for the index.\nWhich is really how the vacuum_cleanup_index_scale_factor feature was\nintended to work all along, anyway -- ISTM that the oldestBtpoXact\nstuff was always just an afterthought to paper-over this annoying\n32-bit XID issue.\n\n3. 
We cannot actually remove the btm_oldest_btpo_xact XID field from\nthe metapage, because of course that would change the BTMetaPageData\nstruct layout, which breaks on-disk compatibility. But why not use it\nfor something useful instead? _bt_update_meta_cleanup_info() can use\nthe same field to store the number of \"newly deleted\" pages from the\nlast btbulkdelete() instead. (See my email from earlier for the\ndefinition of \"newly deleted\".)\n\n4. Now _bt_vacuum_needs_cleanup() can once again consider the\nbtm_oldest_btpo_xact metapage field -- except in a totally different\nway, because now it means something totally different: \"newly deleted\npages during last btbulkdelete() call\" (per item 3). If this # pages\nis very high then we probably should do a full call to btvacuumscan()\n-- _bt_vacuum_needs_cleanup() will return true to make that happen.\n\nIt's unlikely but still possible that a high number of \"newly deleted\npages during the last btbulkdelete() call\" is in itself a good enough\nreason to do a full btvacuumscan() call when the question of calling\nbtvacuumscan() is considered within _bt_vacuum_needs_cleanup(). Item 4\nhere conservatively covers that. Maybe the 32-bit-XID-in-metapage\ntriggering condition had some non-obvious value due to a natural\ntendency for it to limit the number of deleted pages that go\nunrecycled for a long time. (Or maybe there never really was any such\nnatural tendency -- still seems like a good idea to make the change\ndescribed by item 4.)\n\nEven though we are conservative (at least in this sense I just\ndescribed), we nevertheless don't actually care about very old deleted\npages that we have not yet recycled -- provided there are not very\nmany of them. I'm thinking of \"~2% of index\" as the new \"newly deleted\nduring last btbulkdelete() call\" threshold applied within\n_bt_vacuum_needs_cleanup(). 
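In code form, the item-4 trigger might look something like the following sketch (Python for readability; the names and exact threshold are placeholders, and this covers only the newly-deleted-pages condition, not the existing statistics-driven one):

```python
# Sketch of the proposed _bt_vacuum_needs_cleanup() condition: request a
# full btvacuumscan() when the previous btbulkdelete() newly deleted
# more than ~2% of the index's pages.

NEWLY_DELETED_FRACTION = 0.02   # the "~2% of index" threshold

def needs_cleanup(newly_deleted_last_bulkdelete, total_index_pages):
    if total_index_pages <= 0:
        return False
    return newly_deleted_last_bulkdelete > total_index_pages * NEWLY_DELETED_FRACTION

print(needs_cleanup(10, 1000))   # -> False (1% newly deleted)
print(needs_cleanup(50, 1000))   # -> True  (5% newly deleted)
```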
There is no good reason why older\ndeleted-but-not-yet-recycled pages should be considered more valuable\nthan any other page that can be used when there is a page split.\n\nObservations about on-disk compatibility with my patch + this 4 point scheme:\n\nA. It doesn't matter that pg_upgrade'd indexes will have an XID value\nin btm_oldest_btpo_xact that now gets incorrectly interpreted as\n\"newly deleted pages during last btbulkdelete() call\" under the 4\npoint scheme I just outlined.\n\nThe spurious value will get cleaned up on the next VACUUM anyway\n(whether VACUUM goes through btbulkdelete() or through\nbtvacuumcleanup()). Besides, most indexes always have a\nbtm_oldest_btpo_xact value of 0.\n\nB. The patch I posted earlier doesn't actually care about the\nBTREE_VERSION of the index at all. And neither does any of the stuff I\njust described for a future v2 of my patch.\n\nAll indexes can use the new format for deleted pages. On-disk\ncompatibility is easy here because the contents of deleted pages only\nneed to work as a tombstone. We can safely assume that old-format\ndeleted pages (pre-Postgres 14 format deleted pages) must be safe to\nrecycle, because the pg_upgrade itself restarts Postgres. There can be\nno backends that have dangling references to the old-format deleted\npage.\n\nC. All supported nbtree versions (all nbtree versions\nBTREE_MIN_VERSION+) get the same benefits under this scheme.\n\nEven BTREE_MIN_VERSION/version 2 indexes are dynamically upgradable to\nBTREE_NOVAC_VERSION/version 3 indexes via a call to\n_bt_upgrademetapage() -- that has been the case since BTREE_VERSION\nwas bumped to BTREE_NOVAC_VERSION/version 3 for Postgres 11's\nvacuum_cleanup_index_scale_factor feature. So all nbtree indexes will\nhave the btm_oldest_btpo_xact metapage field that I now propose to\nreuse to track \"newly deleted pages during last btbulkdelete() call\",\nper point 4.\n\nIn summary: There are no special cases here. 
No BTREE_VERSION related\ndifficulties. That seems like a huge advantage to me.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 9 Feb 2021 17:53:14 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Tue, Feb 9, 2021 at 5:53 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Here is a plan that allows us to stop storing any kind of XID in the\n> metapage in all cases:\n\nAttached is v2, which deals with the metapage 32-bit\nXID/btm_oldest_btpo_xact issue using the approach I described earlier.\nWe don't store an XID in the metapage anymore in v2. This seems to\nwork well, as I expected it would.\n\n-- \nPeter Geoghegan", "msg_date": "Tue, 9 Feb 2021 23:08:33 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On 10/02/2021 00:14, Peter Geoghegan wrote:\n> There is a long standing problem with the way that nbtree page\n> deletion places deleted pages in the FSM for recycling: The use of a\n> 32-bit XID within the deleted page (in the special\n> area's/BTPageOpaqueData struct's btpo.xact field) is not robust\n> against XID wraparound, which can lead to permanently leaking pages in\n> a variety of scenarios. The problems became worse with the addition of\n> the INDEX_CLEANUP option in Postgres 12 [1]. And, using a 32-bit XID\n> in this context creates risk for any further improvements in VACUUM\n> that similarly involve skipping whole indexes. For example, Masahiko\n> has been working on a patch that teaches VACUUM to skip indexes that\n> are known to have very little garbage [2].\n> \n> Attached patch series fixes the issue once and for all. 
This is\n> something that I'm targeting for Postgres 14, since it's more or less\n> a bug fix.\n\nThanks for picking this up!\n\n> The first patch teaches nbtree to use 64-bit transaction IDs here, and\n> so makes it impossible to leak deleted nbtree pages. This patch is the\n> nbtree equivalent of commit 6655a729, which made GiST use 64-bit XIDs\n> due to exactly the same set of problems. The first patch also makes\n> the level field stored in nbtree page's special area/BTPageOpaqueData\n> reliably store the level, even in a deleted page. This allows me to\n> consistently use the level field within amcheck, including even within\n> deleted pages.\n\nIs it really worth the trouble to maintain 'level' on deleted pages? All \nyou currently do with it is check that the BTP_LEAF flag is set iff \n\"level == 0\", which seems pointless. I guess there could be some \nforensic value in keeping 'level', but meh.\n\n> The second patch in the series adds new information to VACUUM VERBOSE.\n> This makes it easy to understand what's going on here. Index page\n> deletion related output becomes more useful. It might also help with\n> debugging the first patch.\n> \n> Currently, VACUUM VERBOSE output for an index that has some page\n> deletions looks like this:\n> \n> \"38 index pages have been deleted, 38 are currently reusable.\"\n> \n> With the second patch applied, we might see this output at the same\n> point in VACUUM VERBOSE output instead:\n> \n> \"38 index pages have been deleted, 0 are newly deleted, 38 are\n> currently reusable.\"\n> \n> This means that out of the 38 of the pages that were found to be\n> marked deleted in the index, 0 were deleted by the VACUUM operation\n> whose output we see here. That is, there were 0 nbtree pages that were\n> newly marked BTP_DELETED within _bt_unlink_halfdead_page() during\n> *this particular* VACUUM -- the VACUUM operation that we see\n> instrumentation about here. 
It follows that the 38 deleted pages that\n> we encountered must have been marked BTP_DELETED by some previous\n> VACUUM operation.\n> \n> In practice the \"%u are currently reusable\" output should never\n> include newly deleted pages, since there is no way that a page marked\n> BTP_DELETED can be put in the FSM during the same VACUUM operation --\n> that's unsafe (we need all of this recycling/XID indirection precisely\n> because we need to delay recycling until it is truly safe, of course).\n> Note that the \"%u index pages have been deleted\" output includes both\n> pages deleted by some previous VACUUM operation, and newly deleted\n> pages (no change there).\n> \n> Note that the new \"newly deleted\" output is instrumentation about this\n> particular *VACUUM operation*. In contrast, the other two existing\n> output numbers (\"deleted\" and \"currently reusable\") are actually\n> instrumentation about the state of the *index as a whole* at a point\n> in time (barring concurrent recycling of pages counted in VACUUM by\n> some random _bt_getbuf() call in another backend). This fundamental\n> distinction is important here. All 3 numbers/stats that we output can\n> have different values, which can be used to debug the first patch. 
You\n> can directly observe uncommon cases just from the VERBOSE output, like\n> when a long running transaction holds up recycling of a deleted page\n> that was actually marked BTP_DELETED in an *earlier* VACUUM operation.\n> And so if the first patch had any bugs, there'd be a pretty good\n> chance that you could observe them using multiple VACUUM VERBOSE\n> operations -- you might notice something inconsistent or contradictory\n> just by examining the output over time, how things change, etc.\n\nThe full message on master is:\n\nINFO: index \"foo_pkey\" now contains 250001 row versions in 2745 pages\nDETAIL: 250000 index row versions were removed.\n2056 index pages have been deleted, 1370 are currently reusable.\n\nHow about:\n\nINFO: index \"foo_pkey\" now contains 250001 row versions in 2745 pages\nDETAIL: 250000 index row versions and 686 pages were removed.\n2056 index pages are now unused, 1370 are currently reusable.\n\nThe idea is that the first DETAIL line now says what the VACUUM did this \nround, and the last line says what the state of the index is now. One \nconcern with that phrasing is that it might not be clear what \"686 pages \nwere removed\" means. We don't actually shrink the file. Then again, I'm \nnot sure if the \"have been deleted\" was any better in that regard.\n\nIt's still a bit weird that the \"what VACUUM did this round\" information \nis sandwiched between the two other lines that talk about the state of \nthe index after the operation. But I think the language now makes it \nmore clear which is which. 
Or perhaps flip the INFO and first DETAIL \nlines around like this:\n\nINFO: 250000 index row versions and 686 pages were removed from index \n\"foo_pkey\"\nDETAIL: index now contains 250001 row versions in 2745 pages.\n2056 index pages are now unused, of which 1370 are currently reusable.\n\nFor context, the more full message you get on master is:\n\npostgres=# vacuum verbose foo;\nINFO: vacuuming \"public.foo\"\nINFO: scanned index \"foo_pkey\" to remove 250000 row versions\nDETAIL: CPU: user: 0.16 s, system: 0.00 s, elapsed: 0.16 s\nINFO: \"foo\": removed 250000 row versions in 1107 pages\nDETAIL: CPU: user: 0.01 s, system: 0.00 s, elapsed: 0.01 s\nINFO: index \"foo_pkey\" now contains 250001 row versions in 2745 pages\nDETAIL: 250000 index row versions were removed.\n2056 index pages have been deleted, 1370 are currently reusable.\nCPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\nINFO: \"foo\": found 250000 removable, 271 nonremovable row versions in \n1108 out of 4425 pages\nDETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 1164\nThere were 87 unused item identifiers.\nSkipped 0 pages due to buffer pins, 2212 frozen pages.\n0 pages are entirely empty.\nCPU: user: 0.27 s, system: 0.00 s, elapsed: 0.28 s.\nVACUUM\n\nThat's pretty confusing, it's a mix of basically progress indicators \n(vacuuming \"public.foo\"), CPU measurements, information about what was \nremoved, and what the state is afterwards. Would be nice to make that \nmore clear overall. 
But for now, for this particular INFO message, \nperhaps make it more consistent with the lines printed by heapam, like this:\n\nINFO: \"foo_pkey\": removed 250000 index row versions and 686 pages\nDETAIL: index now contains 250001 row versions in 2745 pages.\n2056 index pages are now unused, of which 1370 are currently reusable.\n\n- Heikki\n\n\n", "msg_date": "Wed, 10 Feb 2021 09:58:18 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Wed, Feb 10, 2021 at 10:53 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Feb 9, 2021 at 2:14 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > The first patch teaches nbtree to use 64-bit transaction IDs here, and\n> > so makes it impossible to leak deleted nbtree pages. This patch is the\n> > nbtree equivalent of commit 6655a729, which made GiST use 64-bit XIDs\n> > due to exactly the same set of problems.\n\nThank you for working on this!\n\n>\n> There is an unresolved question for my deleted page XID patch: what\n> should it do about the vacuum_cleanup_index_scale_factor feature,\n> which added an XID to the metapage (its btm_oldest_btpo_xact field). I\n> refer to the work done by commit 857f9c36cda for Postgres 11 by\n> Masahiko. It would be good to get your opinion on this as the original\n> author of that feature, Masahiko.\n>\n> To recap, btm_oldest_btpo_xact is supposed to be the oldest XID among\n> all deleted pages in the index, so clearly it needs to be carefully\n> considered in my patch to make the XIDs 64-bit. Even still, v1 of my\n> patch from today more or less ignores the issue -- it just gets a\n> 32-bit version of the oldest value as determined by the oldestBtpoXact\n> XID tracking stuff (which is largely unchanged, except that it works\n> with 64-bit Full Transaction Ids now).\n>\n> Obviously it is still possible for the 32-bit btm_oldest_btpo_xact\n> field to wrap around in v1 of my patch. 
The obvious thing to do here\n> is to add a new epoch metapage field, effectively making\n> btm_oldest_btpo_xact 64-bit. However, I don't think that that's a good\n> idea. The only reason that we have the btm_oldest_btpo_xact field in\n> the first place is to ameliorate the problem that the patch\n> comprehensively solves! We should stop storing *any* XIDs in the\n> metapage. (Besides, adding a new \"epoch\" field to the metapage would\n> be relatively messy.)\n\nI agree that btm_oldest_btpo_xact will no longer be necessary in terms\nof recycling deleted pages.\n\nThe purpose of btm_oldest_btpo_xact is to prevent deleted pages from\nbeing leaked. As you mentioned, it has the oldest btpo.xact in\nBTPageOpaqueData among all deleted pages in the index. Looking back to\nthe time when we developed the INDEX_CLEANUP option: if we skip index\ncleanup (meaning both ambulkdelete and amvacuumcleanup), there was a\nproblem in btree indexes where deleted pages could never be recycled if\nthe XID wraps around. So the idea behind btm_oldest_btpo_xact is that\nwe remember the oldest btpo.xact among all the deleted pages and do\nbtvacuumscan() if this value is older than the global xmin (meaning\nthere is at least one recyclable page). That way, we can recycle the\ndeleted pages without leaking them (of course, unless INDEX_CLEANUP is\ndisabled).\n\nGiven that we can guarantee that deleted pages are never leaked by\nusing 64-bit XIDs, I also think we don't need this value. We can do\namvacuumcleanup only if the table receives enough insertions to update\nthe statistics (i.e., the vacuum_cleanup_index_scale_factor check). I\nthink this is a more desirable behavior. Not skipping amvacuumcleanup\nif there is even one deleted page that we can recycle is very\nconservative.\n\nConsidering your idea of keeping newly deleted pages in the meta page,\nI can see a little value in keeping btm_oldest_btpo_xact and making\nit a 64-bit XID.
I described below.\n\n>\n> Here is a plan that allows us to stop storing any kind of XID in the\n> metapage in all cases:\n>\n> 1. Stop maintaining the oldest XID among all deleted pages in the\n> entire nbtree index during VACUUM. So we can remove all of the\n> BTVacState.oldestBtpoXact XID tracking stuff, which is currently\n> something that even _bt_pagedel() needs special handling for.\n>\n> 2. Stop considering the btm_oldest_btpo_xact metapage field in\n> _bt_vacuum_needs_cleanup() -- now the \"Cleanup needed?\" logic only\n> cares about maintaining reasonably accurate statistics for the index.\n> Which is really how the vacuum_cleanup_index_scale_factor feature was\n> intended to work all along, anyway -- ISTM that the oldestBtpoXact\n> stuff was always just an afterthought to paper-over this annoying\n> 32-bit XID issue.\n>\n> 3. We cannot actually remove the btm_oldest_btpo_xact XID field from\n> the metapage, because of course that would change the BTMetaPageData\n> struct layout, which breaks on-disk compatibility. But why not use it\n> for something useful instead? _bt_update_meta_cleanup_info() can use\n> the same field to store the number of \"newly deleted\" pages from the\n> last btbulkdelete() instead. (See my email from earlier for the\n> definition of \"newly deleted\".)\n>\n> 4. Now _bt_vacuum_needs_cleanup() can once again consider the\n> btm_oldest_btpo_xact metapage field -- except in a totally different\n> way, because now it means something totally different: \"newly deleted\n> pages during last btbulkdelete() call\" (per item 3). 
If this # pages\n> is very high then we probably should do a full call to btvacuumscan()\n> -- _bt_vacuum_needs_cleanup() will return true to make that happen.\n>\n> It's unlikely but still possible that a high number of \"newly deleted\n> pages during the last btbulkdelete() call\" is in itself a good enough\n> reason to do a full btvacuumscan() call when the question of calling\n> btvacuumscan() is considered within _bt_vacuum_needs_cleanup(). Item 4\n> here conservatively covers that. Maybe the 32-bit-XID-in-metapage\n> triggering condition had some non-obvious value due to a natural\n> tendency for it to limit the number of deleted pages that go\n> unrecycled for a long time. (Or maybe there never really was any such\n> natural tendency -- still seems like a good idea to make the change\n> described by item 4.)\n>\n> Even though we are conservative (at least in this sense I just\n> described), we nevertheless don't actually care about very old deleted\n> pages that we have not yet recycled -- provided there are not very\n> many of them. I'm thinking of \"~2% of index\" as the new \"newly deleted\n> during last btbulkdelete() call\" threshold applied within\n> _bt_vacuum_needs_cleanup(). There is no good reason why older\n> deleted-but-not-yet-recycled pages should be considered more valuable\n> than any other page that can be used when there is a page split.\n\nInteresting.\n\nI like this idea that triggers btvacuumscan() if there are many newly\ndeleted pages. I think this would be helpful especially for the case\nof bulk-deletion on the table. 
But why do we use the number of *newly*\ndeleted pages rather than the total number of deleted pages in the\nindex? IIUC if several btbulkdelete executions deleted index pages less\nthan 2% of the index and those deleted pages could not be recycled yet,\nthen the number of recyclable pages would exceed 2% of the index in\ntotal, but amvacuumcleanup() would not trigger btvacuumscan() because\nthe newly deleted pages from the last execution are less than the 2%\nthreshold. I might be missing something though.\n\nAlso, we need to note that having newly deleted pages doesn't\nnecessarily mean they are recyclable at that time. If the global xmin\nis still older than a deleted page's btpo.xact value, we still cannot\nrecycle that page. I think btm_oldest_btpo_xact probably will help in\nthis case. That is, we store the oldest btpo.xact among those newly\ndeleted pages in btm_oldest_btpo_xact, and we trigger btvacuumscan()\nif there are many newly deleted pages (more than 2% of the index) and\nbtm_oldest_btpo_xact is older than the global xmin (I suppose each\nnewly deleted page could have a different btpo.xact).\n\n>\n> Observations about on-disk compatibility with my patch + this 4 point scheme:\n>\n> A. It doesn't matter that pg_upgrade'd indexes will have an XID value\n> in btm_oldest_btpo_xact that now gets incorrectly interpreted as\n> \"newly deleted pages during last btbulkdelete() call\" under the 4\n> point scheme I just outlined.\n>\n> The spurious value will get cleaned up on the next VACUUM anyway\n> (whether VACUUM goes through btbulkdelete() or through\n> btvacuumcleanup()). Besides, most indexes always have a\n> btm_oldest_btpo_xact value of 0.\n>\n> B. The patch I posted earlier doesn't actually care about the\n> BTREE_VERSION of the index at all. And neither does any of the stuff I\n> just described for a future v2 of my patch.\n>\n> All indexes can use the new format for deleted pages.
On-disk\n> compatibility is easy here because the contents of deleted pages only\n> need to work as a tombstone. We can safely assume that old-format\n> deleted pages (pre-Postgres 14 format deleted pages) must be safe to\n> recycle, because the pg_upgrade itself restarts Postgres. There can be\n> no backends that have dangling references to the old-format deleted\n> page.\n>\n> C. All supported nbtree versions (all nbtree versions\n> BTREE_MIN_VERSION+) get the same benefits under this scheme.\n>\n> Even BTREE_MIN_VERSION/version 2 indexes are dynamically upgradable to\n> BTREE_NOVAC_VERSION/version 3 indexes via a call to\n> _bt_upgrademetapage() -- that has been the case since BTREE_VERSION\n> was bumped to BTREE_NOVAC_VERSION/version 3 for Postgres 11's\n> vacuum_cleanup_index_scale_factor feature. So all nbtree indexes will\n> have the btm_oldest_btpo_xact metapage field that I now propose to\n> reuse to track \"newly deleted pages during last btbulkdelete() call\",\n> per point 4.\n>\n> In summary: There are no special cases here. No BTREE_VERSION related\n> difficulties. That seems like a huge advantage to me.\n\nGreat! I'll look at the v2 patch.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 10 Feb 2021 19:19:25 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Tue, Feb 9, 2021 at 11:58 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> Thanks for picking this up!\n\nI actually had a patch for this in 2019, albeit one that remained in\nrough shape until recently. Must have forgotten about it.\n\n> Is it really worth the trouble to maintain 'level' on deleted pages? All\n> you currently do with it is check that the BTP_LEAF flag is set iff\n> \"level == 0\", which seems pointless. I guess there could be some\n> forensic value in keeping 'level', but meh.\n\nWhat trouble is that? 
The only way in which it's inconvenient is that\nwe have to include the level field in xl_btree_unlink_page WAL records\nfor the first time. The structure of the relevant REDO routine (which\nis called btree_xlog_unlink_page()) ought to explicitly recreate the\noriginal page from scratch, without any special cases. This makes it\npossible to pretend that there never was such a thing as an nbtree\npage whose level field could not be relied on. I personally think that\nit's simpler when seen in the wider context of how the code works and\nis verified.\n\nBesides, there is also amcheck to consider. I am a big believer in\namcheck, and see it as something that has enabled my work on the\nB-Tree code over the past few years. Preserving the level field in\ndeleted pages increases our coverage just a little, and practically\neliminates cases where we cannot rely on the level field.\n\nOf course it's still true that this detail (the deleted pages level\nfield question) will probably never seem important to anybody else. To\nme it's one small detail of a broader strategy. No one detail of that\nbroader strategy, taken in isolation, will ever be crucially\nimportant.\n\nOf course it's also true that we should not assume that a very high\ncost in performance/code/whatever can justify a much smaller benefit\nin amcheck. But you haven't really explained why the cost seems\nunacceptable to you. (Perhaps I missed something.)\n\n> How about:\n>\n> INFO: index \"foo_pkey\" now contains 250001 row versions in 2745 pages\n> DETAIL: 250000 index row versions and 686 pages were removed.\n> 2056 index pages are now unused, 1370 are currently reusable.\n>\n> The idea is that the first DETAIL line now says what the VACUUM did this\n> round, and the last line says what the state of the index is now. 
One\n> concern with that phrasing is that it might not be clear what \"686 pages\n> were removed\" means.\n\n> It's still a bit weird that the \"what VACUUM did this round\" information\n> is sandwiched between the two other lines that talk about the state of\n> the index after the operation. But I think the language now makes it\n> more clear which is which.\n\nIMV our immediate goal for the new VACUUM VERBOSE output should be to\nmake the output as accurate and descriptive as possible (while still\nusing terminology that works for all index AMs, not just nbtree). I\ndon't think that we should give too much weight to making the\ninformation easy to understand in isolation. Because that's basically\nimpossible -- it just doesn't work that way IME.\n\nConfusion over the accounting of \"deleted pages in indexes\" vs \"pages\ndeleted by this VACUUM\" is not new. See my bugfix commit 73a076b0 to\nsee one vintage example. The relevant output of VACUUM VERBOSE\nproduced inconsistent results for perhaps as long as 15 years before I\nnoticed it and fixed it. I somehow didn't notice this despite using it\nfor various tests for my own B-Tree projects a year or two before the\nfix. Tests that produced inconsistent results that I noticed pretty\nearly on, and yet assumed were all down to some subtlety that I didn't\nyet understand.\n\nMy point is this: I am quite prepared to admit that these details\nreally are complicated. But that's not incidental to what's really\ngoing on, or anything (though I agree with your later remarks on the\ngeneral tidiness of VACUUM VERBOSE -- it is a real dog's dinner).\n\nI'm not saying that we should assume that no DBA will find the\nrelevant VACUUM VERBOSE output useful -- I don't think that at all. It\nwill be kind of rare for a user to really comb through it. 
But that's\nmostly because big problems in this area are themselves kind of rare\n(most individual indexes never have any deleted pages IME).\n\nAny DBA consuming this output sensibly will consume it in a way that\nmakes sense in the *context of the problem that they're experiencing*,\nwhatever that might mean for them. They'll consider how it changes\nover time for the same index. They'll try to correlate it with other\nsymptoms, or other problems, and make sense of it in a top-down\nfashion. We should try to make it as descriptive as possible so that\nDBAs will have the breadcrumbs they need to tie it back to whatever\nthe core issue happens to be -- maybe they'll have to read the source\ncode to get to the bottom of it. It's likely to be some rare issue in\nthose cases where the DBA really cares about the details -- it's\nlikely to be workload dependent.\n\nGood DBAs spend much of their time on exceptional problems -- all the\neasy problems will have been automated away already. Things like wait\nevents are popular with DBAs for this reason.\n\n> Or perhaps flip the INFO and first DETAIL\n> lines around like this:\n\n> INFO: 250000 index row versions and 686 pages were removed from index\n> \"foo_pkey\"\n> DETAIL: index now contains 250001 row versions in 2745 pages.\n> 2056 index pages are now unused, of which 1370 are currently reusable.\n>\n> For context, the more full message you get on master is:\n\n> That's pretty confusing, it's a mix of basically progress indicators\n> (vacuuming \"public.foo\"), CPU measurements, information about what was\n> removed, and what the state is afterwards.\n\nI agree that the output of VACUUM VERBOSE is messy. It's probably a\nbunch of accretions that made sense in isolation, but added up to a\nbig mess over time. 
So I agree: now would be a good time to do\nsomething about that.\n\nIt would also be nice to find a way to get this information in the\nlogs when log_autovacuum is enabled (perhaps only when the verbosity\nis increased). I've discussed this with Masahiko in the context of his\nrecent work, actually. Even before we started talking about the XID\npage deletion problem that I'm fixing here.\n\n> INFO: \"foo_pkey\": removed 250000 index row versions and 686 pages\n> DETAIL: index now contains 250001 row versions in 2745 pages.\n> 2056 index pages are now unused, of which 1370 are currently reusable.\n\nI can see what you mean here, and maybe we should do roughly what\nyou've outlined. Still, we should use terminology that isn't too far\nremoved from what actually happens in nbtree. What's a \"removed\" page?\nThe distinction between all of the different kinds of index pages that\nmight be involved here is just subtle. Again, better to use a precise,\ndescriptive term that nobody fully understands -- because hardly\nanybody will fully understand it anyway (even including advanced users\nthat go on to find the VACUUM VERBOSE output very useful for whatever\nreason).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 10 Feb 2021 17:39:24 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Wed, Feb 10, 2021 at 2:20 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> Thank you for working on this!\n\nI'm glad that I finally found time for it! It seems like it'll make\nthings easier elsewhere.\n\nAttached is v3 of the index. I'll describe the changes I made in more\ndetail in my response to your points below.\n\n> I agree that btm_oldest_btpo_xact will no longer be necessary in terms\n> of recycling deleted pages.\n\nCool.\n\n> Given that we can guarantee that deleted pages never be leaked by\n> using 64-bit XID, I also think we don't need this value. 
We can do\n> amvacuumcleanup only if the table receives enough insertions to update\n> the statistics (i.g., vacuum_cleanup_index_scale_factor check). I\n> think this is a more desirable behavior. Not skipping amvacuumcleanup\n> if there is even one deleted page that we can recycle is very\n> conservative.\n>\n> Considering your idea of keeping newly deleted pages in the meta page,\n> I can see a little value that keeping btm_oldest_btpo_xact and making\n> it 64-bit XID. I described below.\n\n> Interesting.\n>\n> I like this idea that triggers btvacuumscan() if there are many newly\n> deleted pages. I think this would be helpful especially for the case\n> of bulk-deletion on the table. But why we use the number of *newly*\n> deleted pages but not the total number of deleted pages in the index?\n\nI was unclear here -- I should not have said \"newly deleted\" pages at\nall. What I actually do when calling _bt_vacuum_needs_cleanup() is\nthis (from v3, at the end of btvacuumscan()):\n\n- _bt_update_meta_cleanup_info(rel, vstate.oldestBtpoXact,\n+ Assert(stats->pages_deleted >= stats->pages_free);\n+ pages_deleted_not_free = stats->pages_deleted - stats->pages_free;\n+ _bt_update_meta_cleanup_info(rel, pages_deleted_not_free,\n info->num_heap_tuples);\n\nWe're actually passing something I have called\n\"pages_deleted_not_free\" here, which is derived from the bulk delete\nstats in the obvious way that you see here (subtraction). I'm not\nusing pages_newly_deleted at all now. Note also that the behavior\ninside _bt_update_meta_cleanup_info() no longer varies based on\nwhether it is called during btvacuumcleanup() or during btbulkdelete()\n-- the same rules apply either way. 
We want to store\npages_deleted_not_free in the metapage at the end of btvacuumscan(),\nno matter what.\n\nThis same pages_deleted_not_free information is now used by\n_bt_vacuum_needs_cleanup() in an obvious and simple way: if it's too\nhigh (over 2.5%), then that will trigger a call to btbulkdelete() (we\nwon't skip scanning the index). Though in practice it probably won't\ncome up that often -- there just aren't ever that many deleted pages\nin most indexes.\n\n> IIUC if several btbulkdelete executions deleted index pages less than\n> 2% of the index and those deleted pages could not be recycled yet,\n> then the number of recyclable pages would exceed 2% of the index in\n> total but amvacuumcleanup() would not trigger btvacuumscan() because\n> the last newly deleted pages are less than the 2% threshold. I might\n> be missing something though.\n\nI think you're right -- my idea of varying the behavior of\n_bt_update_meta_cleanup_info() based on whether it's being called\nduring btvacuumcleanup() or during btbulkdelete() was a bad idea (FWIW\nhalf the problem was that I explained the idea badly to begin with).\nBut, as I said, it's fixed in v3: we simply pass\n\"pages_deleted_not_free\" as an argument to _bt_vacuum_needs_cleanup()\nnow.\n\nDoes that make sense? Does it address this concern?\n\n> Also, we need to note that having newly deleted pages doesn't\n> necessarily mean these always are recyclable at that time. If the\n> global xmin is still older than deleted page's btpo.xact values, we\n> still could not recycle them. I think btm_oldest_btpo_xact probably\n> will help this case. 
That is, we store the oldest btpo.xact among\n> those newly deleted pages to btm_oldest_btpo_xact and we trigger\n> btvacuumscan() if there are many newly deleted pages (more than 2% of\n> index) and the btm_oldest_btpo_xact is older than the global xmin (I\n> suppose each newly deleted pages could have different btpo.xact).\n\nI agree that having no XID in the metapage creates a new small\nproblem. Specifically, there are certain narrow cases that can cause\nconfusion in _bt_vacuum_needs_cleanup(). These cases didn't really\nexist before my patch (kind of).\n\nThe simplest example is easy to run into when debugging the patch on\nyour laptop. Because you're using your personal laptop, and not a real\nproduction server, there will be no concurrent sessions that might\nconsume XIDs. You can run VACUUM VERBOSE manually several times, but\nthat alone will never be enough to enable VACUUM to recycle any of the\npages that the first VACUUM manages to delete (many to mark deleted,\nreporting the pages as \"newly deleted\" via the new instrumentation\nfrom the second patch). Note that the master branch is *also* unable\nto recycle these deleted pages, simply because the \"safe xid\" never\ngets old because there are no newly allocated XIDs to make it look old\n(there are no allocated XIDs just because nothing else happens). That\nin itself is not the new problem.\n\nThe new problem is that _bt_vacuum_needs_cleanup() will no longer\nnotice that the oldest XID among deleted-but-not-yet-recycled pages is\nso old that it will not be able to recycle the pages anyway -- at\nleast not the oldest page, though in this specific case that will\napply to all deleted pages equally. We might as well not bother trying\nyet, which the old code \"gets right\" -- but it doesn't get it right\nfor any good reason. 
That is, the old code won't have VACUUM scan the\nindex at all, so it \"wins\" in this specific scenario.\n\nI think that's okay, though -- it's not a real problem, and actually\nmakes sense and has other advantages. This is why I believe it's okay:\n\n* We really should never VACUUM the same table before even one or two\nXIDs are allocated -- that's what happens in the simple laptop test\nscenario that I described. Surely we should not be too concerned about\n\"doing the right thing\" under this totally artificial set of\nconditions.\n\n(BTW, I've been using txid_current() for my own \"laptop testing\", as a\nway to work around this issue.)\n\n* More generally, if you really can't do recycling of pages that you\ndeleted during the last VACUUM during this VACUUM (perhaps because of\nthe presence of a long-running xact that holds open a snapshot), then\nyou have lots of *huge* problems already, and this is the least of\nyour concerns. Besides, at that point an affected VACUUM will be doing\nwork for an affected index through a btbulkdelete() call, so the\nbehavior of _bt_vacuum_needs_cleanup() becomes irrelevant.\n\n* As you pointed out already, the oldest XID/deleted page from the\nindex may be significantly older than the newest. Why should we bucket\nthem together?\n\nWe could easily have a case where most of the deleted pages can be\nrecycled -- even when all indexes were originally marked deleted by\nthe same VACUUM operation. If there are lots of pages that actually\ncan be recycled, it is probably a bad thing to assume that the oldest\nXID is representative of all of them. 
After all, with the patch we\nonly go out of our way to recycle deleted pages when we are almost\nsure that the total number of recyclable pages (pages marked deleted\nduring a previous VACUUM) exceeds 2.5% of the total size of the index.\nThat broad constraint is important here -- if we do nothing unless\nthere are lots of deleted pages anyway, we are highly unlikely to ever\nerr on the side of being too eager (not eager enough seems more likely\nto me).\n\nI think that we're justified in making a general assumption inside\n_bt_vacuum_needs_cleanup() (which is documented at the point that we\ncall it, inside btvacuumscan()): The assumption that however many\nindex pages the metapage says we'll be able to recycle (whatever the\nfield says) will in fact turn out to be recyclable if we decide that\nwe need to. There are specific cases where that will be kind of wrong,\nas I've gone into, but the assumption/design has many more advantages\nthan disadvantages.\n\nI have tried to capture this in v3 of the patch. Can you take a look?\nSee the new comments inside _bt_vacuum_needs_cleanup(). Plus the\ncomments when we call it inside btvacuumscan().\n\nDo you think that those new comments are helpful? Does this address\nyour concern?\n\nThanks\n-- \nPeter Geoghegan", "msg_date": "Wed, 10 Feb 2021 19:10:34 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Wed, Feb 10, 2021 at 7:10 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached is v3 of the index. I'll describe the changes I made in more\n> detail in my response to your points below.\n\nI forget to mention that v3 adds several assertions like this one:\n\nAssert(!_bt_page_recyclable(BufferGetPage(buf)));\n\nThese appear at a few key points inside generic routines like\n_bt_getbuf(). 
The overall effect is that every nbtree buffer access\n(with the exception of buffer accesses by VACUUM) will make sure that\nthe page that they're about to access is not recyclable (a page that\nan index scan lands on might be half-dead or deleted, but it had\nbetter not be recyclable).\n\nThis can probably catch problems with recycling pages too early, such\nas the problem fixed by commit d3abbbeb back in 2012. Any similar bugs\nin this area that may appear in the future can be expected to be very\nsubtle, for a few reasons. For one, a page can be recyclable but not\nyet entered into the FSM by VACUUM for a long time. (I could go on.)\n\nThe assertions dramatically improve our chances of catching problems\nlike that early.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 10 Feb 2021 19:50:40 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Thu, Feb 11, 2021 at 12:10 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Feb 10, 2021 at 2:20 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > Thank you for working on this!\n>\n> I'm glad that I finally found time for it! It seems like it'll make\n> things easier elsewhere.\n>\n> Attached is v3 of the index. I'll describe the changes I made in more\n> detail in my response to your points below.\n>\n> > I agree that btm_oldest_btpo_xact will no longer be necessary in terms\n> > of recycling deleted pages.\n>\n> Cool.\n>\n> > Given that we can guarantee that deleted pages never be leaked by\n> > using 64-bit XID, I also think we don't need this value. We can do\n> > amvacuumcleanup only if the table receives enough insertions to update\n> > the statistics (i.g., vacuum_cleanup_index_scale_factor check). I\n> > think this is a more desirable behavior. 
Not skipping amvacuumcleanup\n> > if there is even one deleted page that we can recycle is very\n> > conservative.\n> >\n> > Considering your idea of keeping newly deleted pages in the meta page,\n> > I can see a little value that keeping btm_oldest_btpo_xact and making\n> > it 64-bit XID. I described below.\n>\n> > Interesting.\n> >\n> > I like this idea that triggers btvacuumscan() if there are many newly\n> > deleted pages. I think this would be helpful especially for the case\n> > of bulk-deletion on the table. But why we use the number of *newly*\n> > deleted pages but not the total number of deleted pages in the index?\n>\n> I was unclear here -- I should not have said \"newly deleted\" pages at\n> all. What I actually do when calling _bt_vacuum_needs_cleanup() is\n> this (from v3, at the end of btvacuumscan()):\n>\n> - _bt_update_meta_cleanup_info(rel, vstate.oldestBtpoXact,\n> + Assert(stats->pages_deleted >= stats->pages_free);\n> + pages_deleted_not_free = stats->pages_deleted - stats->pages_free;\n> + _bt_update_meta_cleanup_info(rel, pages_deleted_not_free,\n> info->num_heap_tuples);\n>\n> We're actually passing something I have called\n> \"pages_deleted_not_free\" here, which is derived from the bulk delete\n> stats in the obvious way that you see here (subtraction). I'm not\n> using pages_newly_deleted at all now. Note also that the behavior\n> inside _bt_update_meta_cleanup_info() no longer varies based on\n> whether it is called during btvacuumcleanup() or during btbulkdelete()\n> -- the same rules apply either way. We want to store\n> pages_deleted_not_free in the metapage at the end of btvacuumscan(),\n> no matter what.\n>\n> This same pages_deleted_not_free information is now used by\n> _bt_vacuum_needs_cleanup() in an obvious and simple way: if it's too\n> high (over 2.5%), then that will trigger a call to btbulkdelete() (we\n> won't skip scanning the index). 
Though in practice it probably won't\n> come up that often -- there just aren't ever that many deleted pages\n> in most indexes.\n\nThanks for your explanation. That makes sense to me.\n\n>\n> > IIUC if several btbulkdelete executions deleted index pages less than\n> > 2% of the index and those deleted pages could not be recycled yet,\n> > then the number of recyclable pages would exceed 2% of the index in\n> > total but amvacuumcleanup() would not trigger btvacuumscan() because\n> > the last newly deleted pages are less than the 2% threshold. I might\n> > be missing something though.\n>\n> I think you're right -- my idea of varying the behavior of\n> _bt_update_meta_cleanup_info() based on whether it's being called\n> during btvacuumcleanup() or during btbulkdelete() was a bad idea (FWIW\n> half the problem was that I explained the idea badly to begin with).\n> But, as I said, it's fixed in v3: we simply pass\n> \"pages_deleted_not_free\" as an argument to _bt_vacuum_needs_cleanup()\n> now.\n>\n> Does that make sense? Does it address this concern?\n\nYes!\n\n>\n> > Also, we need to note that having newly deleted pages doesn't\n> > necessarily mean these always are recyclable at that time. If the\n> > global xmin is still older than deleted page's btpo.xact values, we\n> > still could not recycle them. I think btm_oldest_btpo_xact probably\n> > will help this case. That is, we store the oldest btpo.xact among\n> > those newly deleted pages to btm_oldest_btpo_xact and we trigger\n> > btvacuumscan() if there are many newly deleted pages (more than 2% of\n> > index) and the btm_oldest_btpo_xact is older than the global xmin (I\n> > suppose each newly deleted pages could have different btpo.xact).\n>\n> I agree that having no XID in the metapage creates a new small\n> problem. Specifically, there are certain narrow cases that can cause\n> confusion in _bt_vacuum_needs_cleanup(). 
These cases didn't really\n> exist before my patch (kind of).\n>\n> The simplest example is easy to run into when debugging the patch on\n> your laptop. Because you're using your personal laptop, and not a real\n> production server, there will be no concurrent sessions that might\n> consume XIDs. You can run VACUUM VERBOSE manually several times, but\n> that alone will never be enough to enable VACUUM to recycle any of the\n> pages that the first VACUUM manages to delete (many to mark deleted,\n> reporting the pages as \"newly deleted\" via the new instrumentation\n> from the second patch). Note that the master branch is *also* unable\n> to recycle these deleted pages, simply because the \"safe xid\" never\n> gets old because there are no newly allocated XIDs to make it look old\n> (there are no allocated XIDs just because nothing else happens). That\n> in itself is not the new problem.\n>\n> The new problem is that _bt_vacuum_needs_cleanup() will no longer\n> notice that the oldest XID among deleted-but-not-yet-recycled pages is\n> so old that it will not be able to recycle the pages anyway -- at\n> least not the oldest page, though in this specific case that will\n> apply to all deleted pages equally. We might as well not bother trying\n> yet, which the old code \"gets right\" -- but it doesn't get it right\n> for any good reason. That is, the old code won't have VACUUM scan the\n> index at all, so it \"wins\" in this specific scenario.\n\nI'm on the same page.\n\n>\n> I think that's okay, though -- it's not a real problem, and actually\n> makes sense and has other advantages. This is why I believe it's okay:\n>\n> * We really should never VACUUM the same table before even one or two\n> XIDs are allocated -- that's what happens in the simple laptop test\n> scenario that I described. 
Surely we should not be too concerned about\n> \"doing the right thing\" under this totally artificial set of\n> conditions.\n\nRight.\n\n>\n> (BTW, I've been using txid_current() for my own \"laptop testing\", as a\n> way to work around this issue.)\n>\n> * More generally, if you really can't do recycling of pages that you\n> deleted during the last VACUUM during this VACUUM (perhaps because of\n> the presence of a long-running xact that holds open a snapshot), then\n> you have lots of *huge* problems already, and this is the least of\n> your concerns. Besides, at that point an affected VACUUM will be doing\n> work for an affected index through a btbulkdelete() call, so the\n> behavior of _bt_vacuum_needs_cleanup() becomes irrelevant.\n>\n\nI agree that there already are huge problems in that case. But I think\nwe need to consider an append-only case as well; after bulk deletion\non an append-only table, vacuum deletes heap tuples and index tuples,\nmarking some index pages as dead and setting an XID into btpo.xact.\nSince we trigger autovacuums even by insertions based on\nautovacuum_vacuum_insert_scale_factor/threshold autovacuum will run on\nthe table again. But if there is a long-running query a \"wasted\"\ncleanup scan could happen many times depending on the values of\nautovacuum_vacuum_insert_scale_factor/threshold and\nvacuum_cleanup_index_scale_factor. This should not happen in the old\ncode. I agree this is DBA problem but it also means this could bring\nanother new problem in a long-running query case.\n\n> * As you pointed out already, the oldest XID/deleted page from the\n> index may be significantly older than the newest. Why should we bucket\n> them together?\n\nI agree with this point.\n\n>\n> We could easily have a case where most of the deleted pages can be\n> recycled -- even when all indexes were originally marked deleted by\n> the same VACUUM operation. 
If there are lots of pages that actually\n> can be recycled, it is probably a bad thing to assume that the oldest\n> XID is representative of all of them. After all, with the patch we\n> only go out of our way to recycle deleted pages when we are almost\n> sure that the total number of recyclable pages (pages marked deleted\n> during a previous VACUUM) exceeds 2.5% of the total size of the index.\n> That broad constraint is important here -- if we do nothing unless\n> there are lots of deleted pages anyway, we are highly unlikely to ever\n> err on the side of being too eager (not eager enough seems more likely\n> to me).\n>\n> I think that we're justified in making a general assumption inside\n> _bt_vacuum_needs_cleanup() (which is documented at the point that we\n> call it, inside btvacuumscan()): The assumption that however many\n> index pages the metapage says we'll be able to recycle (whatever the\n> field says) will in fact turn out to be recyclable if we decide that\n> we need to. There are specific cases where that will be kind of wrong,\n> as I've gone into, but the assumption/design has many more advantages\n> than disadvantages.\n>\n> I have tried to capture this in v3 of the patch. Can you take a look?\n> See the new comments inside _bt_vacuum_needs_cleanup(). Plus the\n> comments when we call it inside btvacuumscan().\n\nI basically agreed with the change made in v3 patch. But I think it's\nprobably worth having a discussion on append-only table cases with\nautovacuums triggered by\nautovacuum_vacuum_insert_scale_factor/threshold.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Sat, 13 Feb 2021 13:38:18 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Fri, Feb 12, 2021 at 8:38 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> I agree that there already are huge problems in that case. 
But I think\n> we need to consider an append-only case as well; after bulk deletion\n> on an append-only table, vacuum deletes heap tuples and index tuples,\n> marking some index pages as dead and setting an XID into btpo.xact.\n> Since we trigger autovacuums even by insertions based on\n> autovacuum_vacuum_insert_scale_factor/threshold autovacuum will run on\n> the table again. But if there is a long-running query a \"wasted\"\n> cleanup scan could happen many times depending on the values of\n> autovacuum_vacuum_insert_scale_factor/threshold and\n> vacuum_cleanup_index_scale_factor. This should not happen in the old\n> code. I agree this is DBA problem but it also means this could bring\n> another new problem in a long-running query case.\n\nI see your point.\n\nThis will only not be a problem with the old code because the oldest\nXID in the metapage happens to restrict VACUUM in what turns out to be\nexactly perfect. But why assume that? It's actually rather unlikely\nthat we won't be able to free even one block, even in this scenario.\nThe oldest XID isn't truly special -- at least not without the\nrestrictions that go with 32-bit XIDs.\n\nThe other thing is that vacuum_cleanup_index_scale_factor is mostly\nabout limiting how long we'll go before having stale statistics, and\nso presumably the user gets the benefit of not having stale statistics\n(maybe that theory is a bit questionable in some cases, but that\ndoesn't have all that much to do with page deletion -- in fact the\nproblem exists without page deletion ever occurring).\n\nBTW, I am thinking about making recycling take place for pages that\nwere deleted during the same VACUUM. We can just use a\nwork_mem-limited array to remember a list of blocks that are deleted\nbut not yet recyclable (plus the XID found in the block). 
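[Editor's illustration] A minimal sketch of that remembered-blocks idea, under the thread's assumption of 64-bit XIDs; the struct and function names here are invented for illustration, not taken from the actual patch:

```c
#include <stdint.h>

typedef uint64_t FullTransactionId;     /* 64-bit XID, per this thread */
typedef uint32_t BlockNumber;

/* One deleted-but-not-yet-recyclable page remembered during the scan */
typedef struct PendingRecyclePage
{
    BlockNumber       blkno;
    FullTransactionId safexid;          /* XID stamped on the deleted page */
} PendingRecyclePage;

/* Count how many remembered pages are already safe to place in the FSM:
 * once safexid falls behind the oldest still-running XID, no session can
 * hold a stale link to the page. */
static int
count_recyclable(const PendingRecyclePage *pending, int npending,
                 FullTransactionId oldest_running_xid)
{
    int nrecyclable = 0;

    for (int i = 0; i < npending; i++)
    {
        if (pending[i].safexid < oldest_running_xid)
            nrecyclable++;
    }
    return nrecyclable;
}
```

With entries this small, even a modest work_mem budget covers far more deleted pages than a single VACUUM is ever likely to produce.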
At the end\nof the VACUUM, (just before calling IndexFreeSpaceMapVacuum() from\nwithin btvacuumscan()), we can then determine which blocks are now\nsafe to recycle, and recycle them after all using some \"late\" calls to\nRecordFreeIndexPage() (and without revisiting the pages a second\ntime). No need to wait for the next VACUUM to recycle pages this way,\nat least in many common cases. The reality is that it usually doesn't\ntake very long for a deleted page to become recyclable -- why wait?\n\nThis idea is enabled by commit c79f6df75dd from 2018. I think it's the\nnext logical step.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 12 Feb 2021 21:04:45 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "Sat, 13 Feb 2021 at 05:39, Masahiko Sawada <sawada.mshk@gmail.com>:\n\n> > (BTW, I've been using txid_current() for my own \"laptop testing\", as a\n> > way to work around this issue.)\n> >\n> > * More generally, if you really can't do recycling of pages that you\n> > deleted during the last VACUUM during this VACUUM (perhaps because of\n> > the presence of a long-running xact that holds open a snapshot), then\n> > you have lots of *huge* problems already, and this is the least of\n> > your concerns. Besides, at that point an affected VACUUM will be doing\n> > work for an affected index through a btbulkdelete() call, so the\n> > behavior of _bt_vacuum_needs_cleanup() becomes irrelevant.\n> >\n>\n> I agree that there already are huge problems in that case. But I think\n> we need to consider an append-only case as well; after bulk deletion\n> on an append-only table, vacuum deletes heap tuples and index tuples,\n> marking some index pages as dead and setting an XID into btpo.xact.\n> Since we trigger autovacuums even by insertions based on\n> autovacuum_vacuum_insert_scale_factor/threshold autovacuum will run on\n> the table again. 
But if there is a long-running query a \"wasted\"\n> cleanup scan could happen many times depending on the values of\n> autovacuum_vacuum_insert_scale_factor/threshold and\n> vacuum_cleanup_index_scale_factor. This should not happen in the old\n> code. I agree this is DBA problem but it also means this could bring\n> another new problem in a long-running query case.\n>\n\nI'd like to outline one relevant case.\n\nQuite often bulk deletes are done on a time series data (oldest) and\neffectively\nremoves a continuous chunk of data at the (physical) beginning of the table,\nthis is especially true for the append-only tables.\nAfter the delete, planning queries takes a long time, due to MergeJoin\nestimates\nare using IndexScans ( see\nhttps://postgr.es/m/17467.1426090533@sss.pgh.pa.us )\nRight now we have to disable MergeJoins via the ALTER SYSTEM to mitigate\nthis.\n\nSo I would, actually, like it very much for VACUUM to kick in sooner in\nsuch cases.\n\n-- \nVictor Yegorov\n\n", "msg_date": "Sat, 13 Feb 2021 07:26:50 +0100", "msg_from": "Victor Yegorov <vyegorov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Fri, Feb 12, 2021 at 10:27 PM Victor Yegorov <vyegorov@gmail.com> wrote:\n> I'd like to outline one relevant case.\n>\n> Quite often bulk deletes are done on a time series data (oldest) and effectively\n> removes a continuous chunk of data at the (physical) beginning of the table,\n> this is especially true for the append-only tables.\n> After the delete, planning queries takes a long time, due to MergeJoin estimates\n> are using IndexScans ( see 
https://postgr.es/m/17467.1426090533@sss.pgh.pa.us )\n> Right now we have to disable MergeJoins via the ALTER SYSTEM to mitigate this.\n>\n> So I would, actually, like it very much for VACUUM to kick in sooner in such cases.\n\nMasahiko was specifically concerned about workloads with\nbursty/uneven/mixed VACUUM triggering conditions -- he mentioned\nautovacuum_vacuum_insert_scale_factor/threshold as being applied to\ntrigger a second VACUUM (which follows from an initial VACUUM that\nperforms deletions following a bulk DELETE).\n\nA VACUUM that needs to delete index tuples will do its btvacuumscan()\nthrough the btbulkdelete() path, not through the btvacuumcleanup()\n\"cleanup only\" path. The btbulkdelete() path won't ever call\n_bt_vacuum_needs_cleanup() in the first place, and so there can be no\nrisk that the relevant changes (changes that the patch makes to that\nfunction) will have some new bad effect. The problem that you have\ndescribed seems very real, but it doesn't seem relevant to the\nspecific scenario that Masahiko expressed concern about. Nor does it\nseem relevant to this patch more generally.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 13 Feb 2021 21:02:12 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Fri, Feb 12, 2021 at 9:04 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Fri, Feb 12, 2021 at 8:38 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I agree that there already are huge problems in that case. But I think\n> > we need to consider an append-only case as well; after bulk deletion\n> > on an append-only table, vacuum deletes heap tuples and index tuples,\n> > marking some index pages as dead and setting an XID into btpo.xact.\n> > Since we trigger autovacuums even by insertions based on\n> > autovacuum_vacuum_insert_scale_factor/threshold autovacuum will run on\n> > the table again. 
But if there is a long-running query a \"wasted\"\n> > cleanup scan could happen many times depending on the values of\n> > autovacuum_vacuum_insert_scale_factor/threshold and\n> > vacuum_cleanup_index_scale_factor. This should not happen in the old\n> > code. I agree this is DBA problem but it also means this could bring\n> > another new problem in a long-running query case.\n>\n> I see your point.\n\nMy guess is that this concern of yours is somehow related to how we do\ndeletion and recycling *in general*. Currently (and even in v3 of the\npatch), we assume that recycling the pages that a VACUUM operation\ndeletes will happen \"eventually\". This kind of makes sense when you\nhave \"typical vacuuming\" -- deletes/updates, and no big bursts, rare\nbulk deletes, etc.\n\nBut when you do have a mixture of different triggering positions,\nwhich is quite possible, it is difficult to understand what\n\"eventually\" actually means...\n\n> BTW, I am thinking about making recycling take place for pages that\n> were deleted during the same VACUUM. We can just use a\n> work_mem-limited array to remember a list of blocks that are deleted\n> but not yet recyclable (plus the XID found in the block).\n\n...which brings me back to this idea.\n\nI've prototyped this. It works really well. In most cases the\nprototype makes VACUUM operations with nbtree index page deletions\nalso recycle the pages that were deleted, at the end of the\nbtvacuumscan(). We do very little or no \"indefinite deferring\" work\nhere. This has obvious advantages, of course, but it also has a\nnon-obvious advantage: the awkward question of concerning \"what\neventually actually means\" with mixed triggering conditions over time\nmostly goes away. So perhaps this actually addresses your concern,\nMasahiko.\n\nI've been testing this with BenchmarkSQL [1], which has several\nindexes that regularly need page deletions. There is also a realistic\n\"life cycle\" to the data in these indexes. 
I added custom\ninstrumentation to display information about what's going on with page\ndeletion when the benchmark is run. I wrote a quick-and-dirty patch\nthat makes log_autovacuum show the same information that you see about\nindex page deletion when VACUUM VERBOSE is run (including the new\npages_newly_deleted field from my patch). With this particular\nTPC-C/BenchmarkSQL workload, VACUUM seems to consistently manage to go\non to place every page that it deletes in the FSM without leaving\nanything to the next VACUUM. There are a very small number of\nexceptions where we \"only\" manage to recycle maybe 95% of the pages\nthat were deleted.\n\nThe race condition that nbtree avoids by deferring recycling was\nalways a narrow one, outside of the extremes -- the way we defer has\nalways been overkill. It's almost always unnecessary to delay placing\ndeleted pages in the FSM until the *next* VACUUM. We only have to\ndelay it until the end of the *same* VACUUM -- why wait until the next\nVACUUM if we don't have to? In general this deferring recycling\nbusiness has nothing to do with MVCC/GC/whatever, and yet the code\nseems to suggest that it does. While it is convenient to use an XID\nfor page deletion and recycling as a way of implementing what Lanin &\nShasha call \"the drain technique\" [2], all we have to do is prevent\ncertain race conditions. This is all about the index itself, the data\nstructure, how it is maintained -- nothing more. It almost seems\nobvious to me.\n\nIt's still possible to imagine extremes. Extremes that even the \"try\nto recycle pages we ourselves deleted when we reach the end of\nbtvacuumscan()\" version of my patch cannot deal with. Maybe it really\nis true that it's inherently impossible to recycle a deleted page even\nat the end of a VACUUM -- maybe a long-running transaction (that could\nin principle have a stale link to our deleted page) starts before we\nVACUUM, and lasts after VACUUM finishes. So it's just not safe. 
When\nthat happens, we're back to having the original problem: we're relying\non some *future* VACUUM operation to do that for us at some indefinite\npoint in the future. It's fair to wonder: What are the implications of\nthat? Are we not back to square one? Don't we have the same \"what does\n'eventually' really mean\" problem once again?\n\nI think that that's okay, because this remaining case is a *truly*\nextreme case (especially with a large index, where index vacuuming\nwill naturally take a long time).\n\nIt will be rare. But more importantly, the fact that scenario is now\nan extreme case justifies treating it as an extreme case. We can teach\n_bt_vacuum_needs_cleanup() to recognize it as an extreme case, too. In\nparticular, I think that it will now be okay to increase the threshold\napplied when considering deleted pages inside\n_bt_vacuum_needs_cleanup(). It was 2.5% of the index size in v3 of the\npatch. But in v4, which has the new recycling enhancement, I think\nthat it would be sensible to make it 5%, or maybe even 10%. This\nnaturally makes Masahiko's problem scenario unlikely to actually\nresult in a truly wasted call to btvacuumscan(). The number of pages\nthat the metapage indicates are \"deleted but not yet placed in the\nFSM\" will be close to the theoretical minimum, because we're no longer\nnaively throwing away information about which specific pages will be\nrecyclable soon. Which is what the current approach does, really.\n\n[1] https://github.com/wieck/benchmarksql\n[2] https://archive.org/stream/symmetricconcurr00lani#page/8/mode/2up\n-- see \"2.5 Freeing Empty Nodes\"\n--\nPeter Geoghegan\n\n\n", "msg_date": "Sat, 13 Feb 2021 22:47:13 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Sat, Feb 13, 2021 at 10:47 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> It will be rare. 
But more importantly, the fact that scenario is now\n> an extreme case justifies treating it as an extreme case. We can teach\n> _bt_vacuum_needs_cleanup() to recognize it as an extreme case, too. In\n> particular, I think that it will now be okay to increase the threshold\n> applied when considering deleted pages inside\n> _bt_vacuum_needs_cleanup(). It was 2.5% of the index size in v3 of the\n> patch. But in v4, which has the new recycling enhancement, I think\n> that it would be sensible to make it 5%, or maybe even 10%. This\n> naturally makes Masahiko's problem scenario unlikely to actually\n> result in a truly wasted call to btvacuumscan().\n\nAttached is v4, which has the \"recycle pages that we ourselves deleted\nduring this same VACUUM operation\" enhancement. It also doubles the\n_bt_vacuum_needs_cleanup() threshold applied to deleted pages -- it\ngoes from 2.5% to 5%. The new patch is the patch series (v4-0002-*)\ncertainly needs more polishing. I'm posting what I have now because v3\nhas bitrot.\n\nBenchmarking has shown that the enhancement in v4-0002-* can\nsignificantly reduce the amount of index bloat in two of the\nBenchmarkSQL/TPC-C indexes.\n\n-- \nPeter Geoghegan", "msg_date": "Sun, 14 Feb 2021 20:39:53 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Sun, Feb 14, 2021 at 3:47 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Fri, Feb 12, 2021 at 9:04 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > On Fri, Feb 12, 2021 at 8:38 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > I agree that there already are huge problems in that case. 
But I think\n> > > we need to consider an append-only case as well; after bulk deletion\n> > > on an append-only table, vacuum deletes heap tuples and index tuples,\n> > > marking some index pages as dead and setting an XID into btpo.xact.\n> > > Since we trigger autovacuums even by insertions based on\n> > > autovacuum_vacuum_insert_scale_factor/threshold autovacuum will run on\n> > > the table again. But if there is a long-running query a \"wasted\"\n> > > cleanup scan could happen many times depending on the values of\n> > > autovacuum_vacuum_insert_scale_factor/threshold and\n> > > vacuum_cleanup_index_scale_factor. This should not happen in the old\n> > > code. I agree this is DBA problem but it also means this could bring\n> > > another new problem in a long-running query case.\n> >\n> > I see your point.\n>\n> My guess is that this concern of yours is somehow related to how we do\n> deletion and recycling *in general*. Currently (and even in v3 of the\n> patch), we assume that recycling the pages that a VACUUM operation\n> deletes will happen \"eventually\". This kind of makes sense when you\n> have \"typical vacuuming\" -- deletes/updates, and no big bursts, rare\n> bulk deletes, etc.\n>\n> But when you do have a mixture of different triggering positions,\n> which is quite possible, it is difficult to understand what\n> \"eventually\" actually means...\n>\n> > BTW, I am thinking about making recycling take place for pages that\n> > were deleted during the same VACUUM. We can just use a\n> > work_mem-limited array to remember a list of blocks that are deleted\n> > but not yet recyclable (plus the XID found in the block).\n>\n> ...which brings me back to this idea.\n>\n> I've prototyped this. It works really well. In most cases the\n> prototype makes VACUUM operations with nbtree index page deletions\n> also recycle the pages that were deleted, at the end of the\n> btvacuumscan(). We do very little or no \"indefinite deferring\" work\n> here. 
This has obvious advantages, of course, but it also has a\n> non-obvious advantage: the awkward question of concerning \"what\n> eventually actually means\" with mixed triggering conditions over time\n> mostly goes away. So perhaps this actually addresses your concern,\n> Masahiko.\n\nYes. I think this would simplify the problem by resolving almost all\nproblems related to indefinite deferring page recycle.\n\nWe will be able to recycle almost all just-deleted pages in practice\nespecially when btvacuumscan() took a long time. And there would not\nbe a noticeable downside, I think.\n\nBTW if btree index starts to use maintenan_work_mem for this purpose,\nwe also need to set amusemaintenanceworkmem to true which is\nconsidered when parallel vacuum.\n\n>\n> I've been testing this with BenchmarkSQL [1], which has several\n> indexes that regularly need page deletions. There is also a realistic\n> \"life cycle\" to the data in these indexes. I added custom\n> instrumentation to display information about what's going on with page\n> deletion when the benchmark is run. I wrote a quick-and-dirty patch\n> that makes log_autovacuum show the same information that you see about\n> index page deletion when VACUUM VERBOSE is run (including the new\n> pages_newly_deleted field from my patch). With this particular\n> TPC-C/BenchmarkSQL workload, VACUUM seems to consistently manage to go\n> on to place every page that it deletes in the FSM without leaving\n> anything to the next VACUUM. There are a very small number of\n> exceptions where we \"only\" manage to recycle maybe 95% of the pages\n> that were deleted.\n\nGreat!\n\n>\n> The race condition that nbtree avoids by deferring recycling was\n> always a narrow one, outside of the extremes -- the way we defer has\n> always been overkill. It's almost always unnecessary to delay placing\n> deleted pages in the FSM until the *next* VACUUM. 
We only have to\n> delay it until the end of the *same* VACUUM -- why wait until the next\n> VACUUM if we don't have to? In general this deferring recycling\n> business has nothing to do with MVCC/GC/whatever, and yet the code\n> seems to suggest that it does. While it is convenient to use an XID\n> for page deletion and recycling as a way of implementing what Lanin &\n> Shasha call \"the drain technique\" [2], all we have to do is prevent\n> certain race conditions. This is all about the index itself, the data\n> structure, how it is maintained -- nothing more. It almost seems\n> obvious to me.\n\nAgreed.\n\n>\n> It's still possible to imagine extremes. Extremes that even the \"try\n> to recycle pages we ourselves deleted when we reach the end of\n> btvacuumscan()\" version of my patch cannot deal with. Maybe it really\n> is true that it's inherently impossible to recycle a deleted page even\n> at the end of a VACUUM -- maybe a long-running transaction (that could\n> in principle have a stale link to our deleted page) starts before we\n> VACUUM, and lasts after VACUUM finishes. So it's just not safe. When\n> that happens, we're back to having the original problem: we're relying\n> on some *future* VACUUM operation to do that for us at some indefinite\n> point in the future. It's fair to wonder: What are the implications of\n> that? Are we not back to square one? Don't we have the same \"what does\n> 'eventually' really mean\" problem once again?\n>\n> I think that that's okay, because this remaining case is a *truly*\n> extreme case (especially with a large index, where index vacuuming\n> will naturally take a long time).\n\nRight.\n\n>\n> It will be rare. But more importantly, the fact that scenario is now\n> an extreme case justifies treating it as an extreme case. We can teach\n> _bt_vacuum_needs_cleanup() to recognize it as an extreme case, too. 
In\n> particular, I think that it will now be okay to increase the threshold\n> applied when considering deleted pages inside\n> _bt_vacuum_needs_cleanup(). It was 2.5% of the index size in v3 of the\n> patch. But in v4, which has the new recycling enhancement, I think\n> that it would be sensible to make it 5%, or maybe even 10%. This\n> naturally makes Masahiko's problem scenario unlikely to actually\n> result in a truly wasted call to btvacuumscan(). The number of pages\n> that the metapage indicates are \"deleted but not yet placed in the\n> FSM\" will be close to the theoretical minimum, because we're no longer\n> naively throwing away information about which specific pages will be\n> recyclable soon. Which is what the current approach does, really.\n>\n\nYeah, increasing the threshold would solve the problem in most cases.\nGiven that nbtree index page deletion is unlikely to happen in\npractice, having the threshold 5% or 10% seems to avoid the problem in\nnearly 100% of cases, I think.\n\nAnother idea I come up with (maybe on top of above your idea) is to\nchange btm_oldest_btpo_xact to 64-bit XID and store the *newest*\nbtpo.xact XID among all deleted pages when the total amount of deleted\npages exceeds 2% of index. That way, we surely can recycle more than\n2% of index when the XID becomes older than the global xmin.\n\nAlso, maybe we can record deleted pages to FSM even without deferring\nand check it when re-using. That is, when we get a free page from FSM\nwe check if the page is really recyclable (maybe _bt_getbuf() already\ndoes this?). IOW, a deleted page can be recycled only when it's\nrequested to be reused. 
If btpo.xact is a 64-bit XID, we never need to\nworry about the case where a deleted page is never requested to be\nreused.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 15 Feb 2021 20:14:48 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Mon, Feb 15, 2021 at 3:15 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> Yes. I think this would simplify the problem by resolving almost all\n> problems related to indefinite deferring page recycle.\n>\n> We will be able to recycle almost all just-deleted pages in practice\n> especially when btvacuumscan() took a long time. And there would not\n> be a noticeable downside, I think.\n\nGreat!\n\n> BTW if btree index starts to use maintenan_work_mem for this purpose,\n> we also need to set amusemaintenanceworkmem to true which is\n> considered when parallel vacuum.\n\nI was just going to use work_mem. This should be okay. Note that\nCREATE INDEX uses an additional work_mem allocation when building a\nunique index, for the second spool/tuplesort. That seems like a\nprecedent that I can follow here.\n\nRight now the BTPendingRecycle structs the patch uses to store\ninformation about a page that the current VACUUM deleted (and may yet\nbe able to place in the FSM) are each 16 bytes (including alignment\noverhead). I could probably make them smaller with a little work, but\neven now that's quite small. Even with the default 4MiB work_mem\nsetting we can fit information about 262144 pages all at once.
That's\n2GiB worth of deleted index pages, which is generally much more than\nwe'll need.\n\n> Yeah, increasing the threshold would solve the problem in most cases.\n> Given that nbtree index page deletion is unlikely to happen in\n> practice, having the threshold 5% or 10% seems to avoid the problem in\n> nearly 100% of cases, I think.\n\nOf course it all depends on workload/index characteristics, in the\nend. It is very rare to delete a percentage of the index that exceeds\nautovacuum_vacuum_scale_factor -- that's the important thing here IMV.\n\n> Another idea I come up with (maybe on top of above your idea) is to\n> change btm_oldest_btpo_xact to 64-bit XID and store the *newest*\n> btpo.xact XID among all deleted pages when the total amount of deleted\n> pages exceeds 2% of index. That way, we surely can recycle more than\n> 2% of index when the XID becomes older than the global xmin.\n\nYou could make my basic approach to recycling deleted pages earlier\n(ideally at the end of the same btvacuumscan() that deleted the pages\nin the first place) more sophisticated in a variety of ways. These are\nall subject to diminishing returns, though.\n\nI've already managed to recycle close to 100% of all B-Tree pages\nduring the same VACUUM with a very simple approach -- at least if we\nassume BenchmarkSQL is representative. It is hard to know how much\nmore effort can be justified. To be clear, I'm not saying that an\nimproved design cannot be justified now or in the future (BenchmarkSQL\nis not like many workloads that people use Postgres for). I'm just\nsaying that I *don't know* where to draw the line. Any particular\nplace that we draw the line feels a little arbitrary to me. 
This\nincludes my own choice of the work_mem-limited BTPendingRecycle array.\nMy patch currently works that way because it's simple -- no other\nreason.\n\nAny scheme to further improve the \"work_mem-limited BTPendingRecycle\narray\" design from my patch boils down to this: A new approach that\nmakes recycling of any remaining deleted pages take place \"before too\nlong\": After the end of the btvacuumscan() BTPendingRecycle array\nstuff (presumably that didn't work out in cases where an improved\napproach matters), but before the next VACUUM takes place (since that\nwill do the required recycling anyway, unless it's unable to do any\nwork at all, in which case it hardly matters). Here are two ideas of\nmy own in this same class as your idea:\n\n1. Remember to do some of the BTPendingRecycle array FSM processing\nstuff in btvacuumcleanup() -- defer some of the recycling of pages\nrecorded in BTPendingRecycle entries (paged deleted during\nbtbulkdelete() for the same VACUUM) until btvacuumcleanup() is called.\n\nRight now btvacuumcleanup() will always do nothing when btbulkdelete()\nwas called earlier. But that's just a current nbtree convention, and\nis no reason to not do this (we don't have to scan the index again at\nall). The advantage of teaching btvacuumcleanup() to do this is that\nit delays the \"BTPendingRecycle array FSM processing\" stuff until the\nlast moment that it is still easy to use the in-memory array (because\nwe haven't freed it yet). In general, doing it later makes it more\nlikely that we'll successfully recycle the pages. 
Though in general it\nmight not make any difference -- so we're still hoping that the\nworkload allows us to recycle everything we deleted, without making\nthe design much more complicated than what I posted already.\n\n(BTW I see that you reviewed commit 4e514c61, so you must have thought\nabout the trade-off between doing deferred recycling in\namvacuumcleanup() vs ambulkdelete(), when to call\nIndexFreeSpaceMapVacuum(), etc. But there is no reason why we cannot\nimplement this idea while calling IndexFreeSpaceMapVacuum() during\nboth btvacuumcleanup() and btbulkdelete(), so that we get the best of\nboth worlds -- fast recycling *and* more delayed processing that is\nmore likely to ultimately succeed.)\n\n2. Remember/serialize the BTPendingRecycle array when we realize that\nwe cannot put all recyclable pages in the FSM at the end of the\ncurrent btvacuumscan(), and then use an autovacuum work item to\nprocess them before too long -- a call to AutoVacuumRequestWork()\ncould even serialize the data on disk.\n\nIdea 2 has the advantage of allowing retries -- eventually it will be\nsafe to recycle the pages, if we just wait long enough.\n\nAnyway, I'm probably not going to pursue either of the 2 ideas for\nPostgres 14. I'm mentioning these ideas now because the trade-offs\nshow that there is no perfect design for this deferring recycling\nstuff. Whatever we do, we should accept that there is no perfect\ndesign.\n\nActually, there is one more reason why I bring up idea 1 now: I want\nto hear your thoughts on the index AM API questions now, which idea 1\ntouches on. Ideally all of the details around the index AM VACUUM APIs\n(i.e. when and where the extra work happens -- btvacuumcleanup() vs\nbtbulkdelete()) won't need to change much in the future. I worry about\ngetting this index AM API stuff right, at least a little.\n\n> Also, maybe we can record deleted pages to FSM even without deferring\n> and check it when re-using. 
That is, when we get a free page from FSM\n> we check if the page is really recyclable (maybe _bt_getbuf() already\n> does this?). IOW, a deleted page can be recycled only when it's\n> requested to be reused. If btpo.xact is 64-bit XID we never need to\n> worry about the case where a deleted page never be requested to be\n> reused.\n\nI've thought about that too (both now and in the past). You're right\nabout _bt_getbuf() -- it checks the XID, at least on the master\nbranch. I took that XID check out in v4 of the patch, but I am now\nstarting to have my doubts about that particular choice. (I'm probably\ngoing to restore the XID check in _bt_getbuf in v5 of the patch.)\n\nI took the XID-is-recyclable check out in v4 of the patch because it\nmight leak pages in rare cases -- which is not a new problem.\n_bt_getbuf() currently has a remarkably relaxed attitude about leaking\npages from the FSM (it is more relaxed about it than I am, certainly)\n-- but why should we just accept leaking pages like that? My new\ndoubts about it are non-specific, though. We know that the FSM isn't\ncrash safe -- but I think that that reduces to \"practically speaking,\nwe can never 100% trust the FSM\". Which makes me nervous. I worry that\nthe FSM can do something completely evil and crazy in rare cases.\n\nIt's not just crash safety. The FSM's fsm_search_avail() function\ncurrently changes the fp_next_slot field with only a shared buffer\nlock held. It's an int, which is supposed to \"be atomic on most\nplatforms\". But we should be using real atomic ops. So the FSM is\ngenerally...kind of wonky.\n\nIn an ideal world, nbtree page deletion + recycling would have crash\nsafety built in. I don't think that it makes sense to not have free\nspace management without crash safety in the case of index AMs,\nbecause it's just not worth it with whole-page units of free space\n(heapam is another story). 
A 100% crash-safe design would naturally\nshift the problem of nbtree page recycle safety from the\nproducer/VACUUM side, to the consumer/_bt_getbuf() side, which I agree\nwould be a real improvement. But these long standing FSM issues are\nnot going to change for Postgres 14. And so changing _bt_getbuf() to\ndo clever things with XIDs won't be possible for Postgres 14 IMV.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 15 Feb 2021 19:26:03 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Mon, Feb 15, 2021 at 7:26 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Actually, there is one more reason why I bring up idea 1 now: I want\n> to hear your thoughts on the index AM API questions now, which idea 1\n> touches on. Ideally all of the details around the index AM VACUUM APIs\n> (i.e. when and where the extra work happens -- btvacuumcleanup() vs\n> btbulkdelete()) won't need to change much in the future. I worry about\n> getting this index AM API stuff right, at least a little.\n\nSpeaking of problems like this, I think I spotted an old one: we call\n_bt_update_meta_cleanup_info() in either btbulkdelete() or\nbtvacuumcleanup(). I think that we should always call it in\nbtvacuumcleanup(), though -- even in cases where there is no call to\nbtvacuumscan() inside btvacuumcleanup() (because btvacuumscan()\nhappened earlier instead, during the btbulkdelete() call).\n\nThis makes the value of IndexVacuumInfo.num_heap_tuples (which is what\nwe store in the metapage) much more accurate -- right now it's always\npg_class.reltuples from *before* the VACUUM started. 
And so the\nbtm_last_cleanup_num_heap_tuples value in an nbtree metapage is often\nkind of inaccurate.\n\nThis \"estimate during ambulkdelete\" issue is documented here (kind of):\n\n/*\n * Struct for input arguments passed to ambulkdelete and amvacuumcleanup\n *\n * num_heap_tuples is accurate only when estimated_count is false;\n * otherwise it's just an estimate (currently, the estimate is the\n * prior value of the relation's pg_class.reltuples field, so it could\n * even be -1). It will always just be an estimate during ambulkdelete.\n */\ntypedef struct IndexVacuumInfo\n{\n ...\n}\n\nThe name of the metapage field is already\nbtm_last_cleanup_num_heap_tuples, which itself suggests the approach\nthat I propose now. So why don't we do it like that already?\n\n(Thinks some more...)\n\nI wonder: did this detail change at the last minute during the\ndevelopment of the feature (just before commit 857f9c36) back in early\n2018? That change would have made it easier to deal with\noldestBtpoXact/btm_oldest_btpo_xact, which IIRC was a late addition to\nthe patch -- so maybe it's truly an accident that the code doesn't\nwork the way that I suggest it should already. (It's annoying to make\nstate from btbulkdelete() appear in btvacuumcleanup(), unless it's\nfrom IndexVacuumInfo or something -- I can imagine this changing at\nthe last minute, just for that reason.)\n\nDo you think that this needs to be treated as a bug in the\nbackbranches, Masahiko? I'm not sure...\n\nIn any case we should probably make this change as part of Postgres\n14. Don't you think? It's certainly easy to do it this way now, since\nthere will be no need to keep around an oldestBtpoXact value until\nbtvacuumcleanup() (in the common case where btbulkdelete() is where we\ncall btvacuumscan()).
The new btm_last_cleanup_num_delpages field\n(which replaces btm_oldest_btpo_xact) has a value that just comes from\nthe bulk stats, which is easy anyway.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 15 Feb 2021 22:52:07 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Tue, Feb 16, 2021 at 3:52 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Feb 15, 2021 at 7:26 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Actually, there is one more reason why I bring up idea 1 now: I want\n> > to hear your thoughts on the index AM API questions now, which idea 1\n> > touches on. Ideally all of the details around the index AM VACUUM APIs\n> > (i.e. when and where the extra work happens -- btvacuumcleanup() vs\n> > btbulkdelete()) won't need to change much in the future. I worry about\n> > getting this index AM API stuff right, at least a little.\n>\n> Speaking of problems like this, I think I spotted an old one: we call\n> _bt_update_meta_cleanup_info() in either btbulkdelete() or\n> btvacuumcleanup(). I think that we should always call it in\n> btvacuumcleanup(), though -- even in cases where there is no call to\n> btvacuumscan() inside btvacuumcleanup() (because btvacuumscan()\n> happened earlier instead, during the btbulkdelete() call).\n>\n> This makes the value of IndexVacuumInfo.num_heap_tuples (which is what\n> we store in the metapage) much more accurate -- right now it's always\n> pg_class.reltuples from *before* the VACUUM started. 
And so the\n> btm_last_cleanup_num_heap_tuples value in a nbtree metapage is often\n> kind of inaccurate.\n>\n> This \"estimate during ambulkdelete\" issue is documented here (kind of):\n>\n> /*\n> * Struct for input arguments passed to ambulkdelete and amvacuumcleanup\n> *\n> * num_heap_tuples is accurate only when estimated_count is false;\n> * otherwise it's just an estimate (currently, the estimate is the\n> * prior value of the relation's pg_class.reltuples field, so it could\n> * even be -1). It will always just be an estimate during ambulkdelete.\n> */\n> typedef struct IndexVacuumInfo\n> {\n> ...\n> }\n>\n> The name of the metapage field is already\n> btm_last_cleanup_num_heap_tuples, which already suggests the approach\n> that I propose now. So why don't we do it like that already?\n>\n> (Thinks some more...)\n>\n> I wonder: did this detail change at the last minute during the\n> development of the feature (just before commit 857f9c36) back in early\n> 2018? That change have made it easier to deal with\n> oldestBtpoXact/btm_oldest_btpo_xact, which IIRC was a late addition to\n> the patch -- so maybe it's truly an accident that the code doesn't\n> work the way that I suggest it should already. (It's annoying to make\n> state from btbulkdelete() appear in btvacuumcleanup(), unless it's\n> from IndexVacuumInfo or something -- I can imagine this changing at\n> the last minute, just for that reason.)\n>\n> Do you think that this needs to be treated as a bug in the\n> backbranches, Masahiko? I'm not sure...\n\nUgh, yes, I think it's a bug.\n\nWhen developing this feature, in an old version patch, we used to set\ninvalid values to both btm_oldest_btpo_xact and\nbtm_last_cleanup_num_heap_tuples in btbulkdelete() to reset these\nvalues. But we decided to set valid values to both even in\nbtbulkdelete(). 
I believe that decision was correct in terms of\nbtm_oldest_btpo_xact because with the old version patch we would do an\nunnecessary index scan during btvacuumcleanup(). But it’s wrong in\nterms of btm_last_cleanup_num_heap_tuples, as you pointed out.\n\nThis bug would make the check of vacuum_cleanup_index_scale_factor\nuntrustworthy. So I think it’s better to backpatch, but we need to\nnote that, to fix this issue properly, in a case where a vacuum called\nbtbulkdelete() earlier, probably we should update only\nbtm_oldest_btpo_xact in btbulkdelete() and then update\nbtm_last_cleanup_num_heap_tuples in btvacuumcleanup(). In this case,\nwe don’t know the oldest btpo.xact among the deleted pages in\nbtvacuumcleanup(). This means that we would need to update the meta\npage twice, leading to WAL logging twice. Since we could already\nupdate the meta page more than once when a vacuum calls btbulkdelete()\nmultiple times, I think it would not be a problem, though.
Updating those values separately in those callbacks\nwould be straightforward.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 16 Feb 2021 21:16:32 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Tue, Feb 16, 2021 at 4:17 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> Ugh, yes, I think it's a bug.\n\nI was actually thinking of a similar bug in nbtree deduplication when\nI spotted this one -- see commit 48e12913. The index AM API stuff is\ntricky.\n\n> When developing this feature, in an old version patch, we used to set\n> invalid values to both btm_oldest_btpo_xact and\n> btm_last_cleanup_num_heap_tuples in btbulkdelete() to reset these\n> values. But we decided to set valid values to both even in\n> btbulkdelete(). I believe that decision was correct in terms of\n> btm_oldest_btpo_xact because with the old version patch we will do an\n> unnecessary index scan during btvacuumcleanup(). But it’s wrong in\n> terms of btm_last_cleanup_num_heap_tuples, as you pointed out.\n\nRight.\n\n> This bug would make the check of vacuum_cleanup_index_scale_factor\n> untrust. So I think it’s better to backpatch but I think we need to\n> note that to fix this issue properly, in a case where a vacuum called\n> btbulkdelete() earlier, probably we should update only\n> btm_oldest_btpo_xact in btbulkdelete() and then update\n> btm_last_cleanup_num_heap_tuples in btvacuumcleanup(). In this case,\n> we don’t know the oldest btpo.xact among the deleted pages in\n> btvacuumcleanup(). This means that we would need to update the meta\n> page twice, leading to WAL logging twice. Since we already could\n> update the meta page more than once when a vacuum calls btbulkdelete()\n> multiple times I think it would not be a problem, though.\n\nI agree that that approach is fine. 
Realistically, we won't even have\nto update the metapage twice in most cases. Because most indexes never\nhave even one page deletion anyway.\n\n> As I mentioned above, we might need to consider how btbulkdelete() can\n> tell btvacuumcleanup() btm_last_cleanup_num_delpages in a case where a\n> vacuum called btbulkdelete earlier. During parallel vacuum, two\n> different processes could do btbulkdelete() and btvacuumcleanup()\n> respectively. Updating those values separately in those callbacks\n> would be straightforward.\n\nI don't see why it should be a problem for my patch/Postgres 14,\nbecause we don't have the same btpo.xact/oldestBtpoXact issue that the\noriginal Postgres 11 commit dealt with. The patch determines a value\nfor btm_last_cleanup_num_delpages (which I call\npages_deleted_not_free) by subtracting fields from the bulk delete\nstats: we just use \"stats->pages_deleted - stats->pages_free\".\n\nIsn't btvacuumcleanup() (or any other amvacuumcleanup() routine)\nentitled to rely on the bulk delete stats being set in the way I've\ndescribed? I assumed that that was okay in general, but I haven't\ntested parallel VACUUM specifically. Will parallel VACUUM really fail\nto ensure that values in bulk stats fields (like pages_deleted and\npages_free) get set correctly for amvacuumcleanup() callbacks?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 16 Feb 2021 11:35:03 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Tue, Feb 16, 2021 at 11:35 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> Isn't btvacuumcleanup() (or any other amvacuumcleanup() routine)\n> entitled to rely on the bulk delete stats being set in the way I've\n> described? I assumed that that was okay in general, but I haven't\n> tested parallel VACUUM specifically. 
Will parallel VACUUM really fail\n> to ensure that values in bulk stats fields (like pages_deleted and\n> pages_free) get set correctly for amvacuumcleanup() callbacks?\n\nI tested the pages_deleted_not_free stuff with a version of my patch\nthat consistently calls _bt_update_meta_cleanup_info() during\nbtvacuumcleanup(), and never during btbulkdelete(). And it works just\nfine -- including with parallel VACUUM.\n\nEvidently my understanding of what btvacuumcleanup() (or any other\namvacuumcleanup() routine) can expect from bulk delete stats was\ncorrect. It doesn't matter whether or not parallel VACUUM happens to\nbe involved -- it works just as well.\n\nThis is good news, since of course it means that it's okay to stick to\nthe simple approach of calculating pages_deleted_not_free. Passing\npages_deleted_not_free (a.k.a. btm_last_cleanup_num_delpages) to\n_bt_update_meta_cleanup_info() during btvacuumcleanup() works just as\nwell when combined with my fix for the\n\"IndexVacuumInfo.num_heap_tuples is inaccurate during btbulkdelete()\"\nbug. That approach to fixing the IndexVacuumInfo.num_heap_tuples bug\ncreates no new problems for my patch. There is still no need to think\nabout when or how the relevant bulk delete fields (pages_deleted and\npages_free) were set. And it doesn't matter whether or not parallel\nVACUUM is involved.\n\n(Of course it's also true that we can't do that on the backbranches.\nPurely because we must worry about btpo.xact/oldestBtpoXact on the\nbackbranches. We'll probably have to teach the code in released\nversions to set btm_oldest_btpo_xact and\nbtm_last_cleanup_num_heap_tuples in separate calls -- since there is\nno easy way to \"send\" the oldestBtpoXact value determined during a\nbtbulkdelete() to a later corresponding btvacuumcleanup().
That's a\nbit of a kludge, but I'm not worried about it.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 16 Feb 2021 12:41:04 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Wed, Feb 17, 2021 at 5:41 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Feb 16, 2021 at 11:35 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Isn't btvacuumcleanup() (or any other amvacuumcleanup() routine)\n> > entitled to rely on the bulk delete stats being set in the way I've\n> > described? I assumed that that was okay in general, but I haven't\n> > tested parallel VACUUM specifically. Will parallel VACUUM really fail\n> > to ensure that values in bulk stats fields (like pages_deleted and\n> > pages_free) get set correctly for amvacuumcleanup() callbacks?\n>\n> I tested the pages_deleted_not_free stuff with a version of my patch\n> that consistently calls _bt_update_meta_cleanup_info() during\n> btvacuumcleanup(), and never during btbulkdelete(). And it works just\n> fine -- including with parallel VACUUM.\n>\n> Evidently my understanding of what btvacuumcleanup() (or any other\n> amvacuumcleanup() routine) can expect from bulk delete stats was\n> correct. It doesn't matter whether or not parallel VACUUM happens to\n> be involved -- it works just as well.\n\nYes, you're right. I missed that pages_deleted_not_free is calculated\nby (stats->pages_deleted - stats->pages_free) where both are in\nIndexBulkDeleteResult.\n\n>\n> This is good news, since of course it means that it's okay to stick to\n> the simple approach of calculating pages_deleted_not_free. Passing\n> pages_deleted_not_free (a.k.a. btm_last_cleanup_num_delpages) to\n> _bt_update_meta_cleanup_info() during btvacuumcleanup() works just as\n> well when combined with my fix for the the\n> \"IndexVacuumInfo.num_heap_tuples is inaccurate during btbulkdelete()\"\n> bug. 
That approach to fixing the IndexVacuumInfo.num_heap_tuples bug\n> creates no new problems for my patch. There is still no need to think\n> about when or how the relevant bulk delete fields (pages_deleted and\n> pages_free) were set. And it doesn't matter whether or not parallel\n> VACUUM is involved.\n\nAgreed.\n\n>\n> (Of course it's also true that we can't do that on the backbranches.\n> Purely because we must worry about btpo.xact/oldestBtpoXact on the\n> backbranches. We'll probably have to teach the code in released\n> versions to set btm_oldest_btpo_xact and\n> btm_last_cleanup_num_heap_tuples in separate calls -- since there is\n> no easy way to \"send\" the oldestBtpoXact value determined during a\n> btbulkdelete() to a later corresponding btvacuumcleanup(). That's a\n> bit of a kludge, but I'm not worried about it.)\n\nAgreed.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 18 Feb 2021 19:38:08 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Tue, Feb 16, 2021 at 12:26 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Feb 15, 2021 at 3:15 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > Yes. I think this would simplify the problem by resolving almost all\n> > problems related to indefinite deferring page recycle.\n> >\n> > We will be able to recycle almost all just-deleted pages in practice\n> > especially when btvacuumscan() took a long time. And there would not\n> > be a noticeable downside, I think.\n>\n> Great!\n>\n> > BTW if btree index starts to use maintenan_work_mem for this purpose,\n> > we also need to set amusemaintenanceworkmem to true which is\n> > considered when parallel vacuum.\n>\n> I was just going to use work_mem. This should be okay. Note that\n> CREATE INDEX uses an additional work_mem allocation when building a\n> unique index, for the second spool/tuplesort. 
That seems like a\n> precedent that I can follow here.\n>\n> Right now the BTPendingRecycle struct the patch uses to store\n> information about a page that the current VACUUM deleted (and may yet\n> be able to place in the FSM) are each 16 bytes (including alignment\n> overhead). I could probably make them smaller with a little work, but\n> even now that's quite small. Even with the default 4MiB work_mem\n> setting we can fit information about 262144 pages all at once. That's\n> 2GiB worth of deleted index pages, which is generally much more than\n> we'll need.\n\nCool.\n\n>\n> > Yeah, increasing the threshold would solve the problem in most cases.\n> > Given that nbtree index page deletion is unlikely to happen in\n> > practice, having the threshold 5% or 10% seems to avoid the problem in\n> > nearly 100% of cases, I think.\n>\n> Of course it all depends on workload/index characteristics, in the\n> end. It is very rare to delete a percentage of the index that exceeds\n> autovacuum_vacuum_scale_factor -- that's the important thing here IMV.\n>\n> > Another idea I come up with (maybe on top of above your idea) is to\n> > change btm_oldest_btpo_xact to 64-bit XID and store the *newest*\n> > btpo.xact XID among all deleted pages when the total amount of deleted\n> > pages exceeds 2% of index. That way, we surely can recycle more than\n> > 2% of index when the XID becomes older than the global xmin.\n>\n> You could make my basic approach to recycling deleted pages earlier\n> (ideally at the end of the same btvacuumscan() that deleted the pages\n> in the first place) more sophisticated in a variety of ways. These are\n> all subject to diminishing returns, though.\n>\n> I've already managed to recycle close to 100% of all B-Tree pages\n> during the same VACUUM with a very simple approach -- at least if we\n> assume BenchmarkSQL is representative. It is hard to know how much\n> more effort can be justified. 
To be clear, I'm not saying that an\n> improved design cannot be justified now or in the future (BenchmarkSQL\n> is not like many workloads that people use Postgres for). I'm just\n> saying that I *don't know* where to draw the line. Any particular\n> place that we draw the line feels a little arbitrary to me. This\n> includes my own choice of the work_mem-limited BTPendingRecycle array.\n> My patch currently works that way because it's simple -- no other\n> reason.\n>\n> Any scheme to further improve the \"work_mem-limited BTPendingRecycle\n> array\" design from my patch boils down to this: A new approach that\n> makes recycling of any remaining deleted pages take place \"before too\n> long\": After the end of the btvacuumscan() BTPendingRecycle array\n> stuff (presumably that didn't work out in cases where an improved\n> approach matters), but before the next VACUUM takes place (since that\n> will do the required recycling anyway, unless it's unable to do any\n> work at all, in which case it hardly matters).\n\nI agreed with this direction.\n\n> Here are two ideas of\n> my own in this same class as your idea:\n>\n> 1. Remember to do some of the BTPendingRecycle array FSM processing\n> stuff in btvacuumcleanup() -- defer some of the recycling of pages\n> recorded in BTPendingRecycle entries (paged deleted during\n> btbulkdelete() for the same VACUUM) until btvacuumcleanup() is called.\n>\n> Right now btvacuumcleanup() will always do nothing when btbulkdelete()\n> was called earlier. But that's just a current nbtree convention, and\n> is no reason to not do this (we don't have to scan the index again at\n> all). The advantage of teaching btvacuumcleanup() to do this is that\n> it delays the \"BTPendingRecycle array FSM processing\" stuff until the\n> last moment that it is still easy to use the in-memory array (because\n> we haven't freed it yet). In general, doing it later makes it more\n> likely that we'll successfully recycle the pages. 
Though in general it\n> might not make any difference -- so we're still hoping that the\n> workload allows us to recycle everything we deleted, without making\n> the design much more complicated than what I posted already.\n>\n> (BTW I see that you reviewed commit 4e514c61, so you must have thought\n> about the trade-off between doing deferred recycling in\n> amvacuumcleanup() vs ambulkdelete(), when to call\n> IndexFreeSpaceMapVacuum(), etc. But there is no reason why we cannot\n> implement this idea while calling IndexFreeSpaceMapVacuum() during\n> both btvacuumcleanup() and btbulkdelete(), so that we get the best of\n> both worlds -- fast recycling *and* more delayed processing that is\n> more likely to ultimately succeed.)\n\nI think this idea 1 also needs to serialize the BTPendingRecycle array\nsomewhere to pass it to a parallel vacuum worker in the parallel vacuum\ncase.\n\nDelaying the \"BTPendingRecycle array FSM processing\" stuff until\nbtvacuumcleanup() is a good idea. But I think it's a relatively rare\ncase in practice where index vacuum runs more than once (e.g., using\nup maintenance_work_mem). So considering the development cost of\nserializing the BTPendingRecycle array and the index AM API changes,\nattempting to recycle the deleted pages at the end of btvacuumscan()\nwould be a balanced strategy.
Perhaps autovacuum would need to end with an\nerror so that it retries later in cases where it could not recycle all\ndeleted pages.\n\nI have also thought about the idea of storing pending-recycle pages\nsomewhere to avoid an index scan when we do the XID-is-recyclable\ncheck. My idea was to store them in btree pages dedicated to this\npurpose, linked from the meta page, but I prefer your idea.\n\n>\n> Anyway, I'm probably not going to pursue either of the 2 ideas for\n> Postgres 14. I'm mentioning these ideas now because the trade-offs\n> show that there is no perfect design for this deferring recycling\n> stuff. Whatever we do, we should accept that there is no perfect\n> design.\n>\n> Actually, there is one more reason why I bring up idea 1 now: I want\n> to hear your thoughts on the index AM API questions now, which idea 1\n> touches on. Ideally all of the details around the index AM VACUUM APIs\n> (i.e. when and where the extra work happens -- btvacuumcleanup() vs\n> btbulkdelete()) won't need to change much in the future. I worry about\n> getting this index AM API stuff right, at least a little.\n\nAfter introducing parallel vacuum, index AMs are not able to pass\narbitrary information taken in ambulkdelete() to amvacuumcleanup(), as\nthe old gist index code did. If there is a good use case where an\nindex AM needs to pass arbitrary information to amvacuumcleanup(), I\nthink it'd be a good idea to add an index AM API so that parallel\nvacuum can serialize it and pass it to another parallel vacuum worker.\nBut, as I mentioned above, given that vacuum calls ambulkdelete() only\nonce in most cases, and that we'd like to improve how TIDs are stored\nin maintenance_work_mem space (discussed a little on thread[1]),\ndelaying the \"BTPendingRecycle array FSM processing\" stuff to\nbtvacuumcleanup() would not be a good use case.\n\n>\n> > Also, maybe we can record deleted pages to FSM even without deferring\n> > and check it when re-using. 
That is, when we get a free page from FSM\n> > we check if the page is really recyclable (maybe _bt_getbuf() already\n> > does this?). IOW, a deleted page can be recycled only when it's\n> > requested to be reused. If btpo.xact is 64-bit XID we never need to\n> > worry about the case where a deleted page never be requested to be\n> > reused.\n>\n> I've thought about that too (both now and in the past). You're right\n> about _bt_getbuf() -- it checks the XID, at least on the master\n> branch. I took that XID check out in v4 of the patch, but I am now\n> starting to have my doubts about that particular choice. (I'm probably\n> going to restore the XID check in _bt_getbuf in v5 of the patch.)\n>\n> I took the XID-is-recyclable check out in v4 of the patch because it\n> might leak pages in rare cases -- which is not a new problem.\n> _bt_getbuf() currently has a remarkably relaxed attitude about leaking\n> pages from the FSM (it is more relaxed about it than I am, certainly)\n> -- but why should we just accept leaking pages like that? My new\n> doubts about it are non-specific, though. We know that the FSM isn't\n> crash safe -- but I think that that reduces to \"practically speaking,\n> we can never 100% trust the FSM\". Which makes me nervous. I worry that\n> the FSM can do something completely evil and crazy in rare cases.\n>\n> It's not just crash safety. The FSM's fsm_search_avail() function\n> currently changes the fp_next_slot field with only a shared buffer\n> lock held. It's an int, which is supposed to \"be atomic on most\n> platforms\". But we should be using real atomic ops. So the FSM is\n> generally...kind of wonky.\n>\n> In an ideal world, nbtree page deletion + recycling would have crash\n> safety built in. I don't think that it makes sense to not have free\n> space management without crash safety in the case of index AMs,\n> because it's just not worth it with whole-page units of free space\n> (heapam is another story). 
A 100% crash-safe design would naturally\n> shift the problem of nbtree page recycle safety from the\n> producer/VACUUM side, to the consumer/_bt_getbuf() side, which I agree\n> would be a real improvement. But these long standing FSM issues are\n> not going to change for Postgres 14. And so changing _bt_getbuf() to\n> do clever things with XIDs won't be possible for Postgres 14 IMV.\n\nAgreed. Thanks for your explanation.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/flat/CA%2Bfd4k76j8jKzJzcx8UqEugvayaMSnQz0iLUt_XgBp-_-bd22A%40mail.gmail.com\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 18 Feb 2021 20:13:01 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Thu, Feb 18, 2021 at 3:13 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> Agreed. Thanks for your explanation.\n\nAttached is v5, which has some of the changes I talked about. Changes\nfrom v4 include:\n\n* Now only updates metapage during btvacuumcleanup() in the first\npatch, which is enough to fix the existing\nIndexVacuumInfo.num_heap_tuples issue.\n\n* Restored _bt_getbuf() page-from-FSM XID check. Out of sheer paranoia.\n\n* The second patch in the series now respects work_mem when sizing the\nBTPendingRecycle array.\n\n* New enhancement to the XID GlobalVisCheckRemovableFullXid() test\nused in the second patch, to allow it to recycle even more pages.\n(Still unsure of some of the details here.)\n\nI would like to commit the first patch in a few days -- I refer to the\nbig patch that makes deleted page XIDs 64-bit/full. Can you take a\nlook at that one, Masahiko? That would be helpful. 
I can produce a bug\nfix for the IndexVacuumInfo.num_heap_tuples issue fairly easily, but I\nthink that that should be written after the first patch is finalized\nand committed.\n\nThe second patch (the new recycling optimization) will require more\nwork and testing.\n\nThanks!\n-- \nPeter Geoghegan", "msg_date": "Thu, 18 Feb 2021 22:12:01 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Fri, Feb 19, 2021 at 3:12 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Thu, Feb 18, 2021 at 3:13 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > Agreed. Thanks for your explanation.\n>\n> Attached is v5, which has some of the changes I talked about. Changes\n> from v4 include:\n>\n> * Now only updates metapage during btvacuumcleanup() in the first\n> patch, which is enough to fix the existing\n> IndexVacuumInfo.num_heap_tuples issue.\n>\n> * Restored _bt_getbuf() page-from-FSM XID check. Out of sheer paranoia.\n>\n> * The second patch in the series now respects work_mem when sizing the\n> BTPendingRecycle array.\n>\n> * New enhancement to the XID GlobalVisCheckRemovableFullXid() test\n> used in the second patch, to allow it to recycle even more pages.\n> (Still unsure of some of the details here.)\n\nThank you for updating the patch!\n\n>\n> I would like to commit the first patch in a few days -- I refer to the\n> big patch that makes deleted page XIDs 64-bit/full. Can you take a\n> look at that one, Masahiko? That would be helpful. 
I can produce a bug\n> fix for the IndexVacuumInfo.num_heap_tuples issue fairly easily, but I\n> think that that should be written after the first patch is finalized\n> and committed.\n\nI'll look at the first patch first.\n\n>\n> The second patch (the new recycling optimization) will require more\n> work and testing.\n\nThen also look at those patches.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 19 Feb 2021 15:18:08 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Fri, Feb 19, 2021 at 3:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Feb 19, 2021 at 3:12 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > On Thu, Feb 18, 2021 at 3:13 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > Agreed. Thanks for your explanation.\n> >\n> > Attached is v5, which has some of the changes I talked about. Changes\n> > from v4 include:\n> >\n> > * Now only updates metapage during btvacuumcleanup() in the first\n> > patch, which is enough to fix the existing\n> > IndexVacuumInfo.num_heap_tuples issue.\n> >\n> > * Restored _bt_getbuf() page-from-FSM XID check. Out of sheer paranoia.\n> >\n> > * The second patch in the series now respects work_mem when sizing the\n> > BTPendingRecycle array.\n> >\n> > * New enhancement to the XID GlobalVisCheckRemovableFullXid() test\n> > used in the second patch, to allow it to recycle even more pages.\n> > (Still unsure of some of the details here.)\n>\n> Thank you for updating the patch!\n>\n> >\n> > I would like to commit the first patch in a few days -- I refer to the\n> > big patch that makes deleted page XIDs 64-bit/full. Can you take a\n> > look at that one, Masahiko? That would be helpful. 
I can produce a bug\n> > fix for the IndexVacuumInfo.num_heap_tuples issue fairly easily, but I\n> > think that that should be written after the first patch is finalized\n> > and committed.\n>\n> I'll look at the first patch first.\n\nThe 0001 patch looks good to me. In the documentation, I think we need\nto update the following paragraph in the description of\nvacuum_cleanup_index_scale_factor:\n\nIf no tuples were deleted from the heap, B-tree indexes are still\nscanned at the VACUUM cleanup stage when at least one of the following\nconditions is met: the index statistics are stale, or the index\ncontains deleted pages that can be recycled during cleanup. Index\nstatistics are considered to be stale if the number of newly inserted\ntuples exceeds the vacuum_cleanup_index_scale_factor fraction of the\ntotal number of heap tuples detected by the previous statistics\ncollection. The total number of heap tuples is stored in the index\nmeta-page. Note that the meta-page does not include this data until\nVACUUM finds no dead tuples, so B-tree index scan at the cleanup stage\ncan only be skipped if the second and subsequent VACUUM cycles detect\nno dead tuples.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 22 Feb 2021 21:20:23 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Mon, Feb 22, 2021 at 4:21 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> The 0001 patch looks good to me. In the documentation, I think we need\n> to update the following paragraph in the description of\n> vacuum_cleanup_index_scale_factor:\n\nGood point. I think that the structure should make the page deletion\ntriggering condition have only secondary importance -- it is only\ndescribed at all to be complete and exhaustive. 
The\nvacuum_cleanup_index_scale_factor-related threshold is all that users\nwill really care about in this area.\n\nThe reasons for this are: it's pretty rare to have many page\ndeletions, but never again delete/non-hot update even one single\ntuple. But when that happens, it's *much* rarer still to *also* have\ninserts, that might actually benefit from recycling the deleted page.\nSo it's very narrow.\n\nI think that I'll add a \"Note\" box that talks about the page deletion\nstuff, right at the end. It's actually kind of an awkward thing to\ndescribe, and yet I think we still need to describe it.\n\nI also think that the existing documentation should clearly point out\nthat the vacuum_cleanup_index_scale_factor only gets considered when\nthere are no updates or deletes since the last VACUUM -- that seems\nlike an existing problem worth fixing now. It's way too unclear that\nthis setting only really concerns append-only tables.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 22 Feb 2021 14:54:54 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Tue, Feb 23, 2021 at 7:55 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Feb 22, 2021 at 4:21 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > The 0001 patch looks good to me. In the documentation, I think we need\n> > to update the following paragraph in the description of\n> > vacuum_cleanup_index_scale_factor:\n>\n> Good point. I think that the structure should make the page deletion\n> triggering condition have only secondary importance -- it is only\n> described at all to be complete and exhaustive. The\n> vacuum_cleanup_index_scale_factor-related threshold is all that users\n> will really care about in this area.\n>\n> The reasons for this are: it's pretty rare to have many page\n> deletions, but never again delete/non-hot update even one single\n> tuple. 
But when that happens, it's *much* rarer still to *also* have\n> inserts, that might actually benefit from recycling the deleted page.\n> So it's very narrow.\n>\n> I think that I'll add a \"Note\" box that talks about the page deletion\n> stuff, right at the end. It's actually kind of an awkward thing to\n> describe, and yet I think we still need to describe it.\n\nYeah, triggering btvacuumscan() by having many deleted index pages\nwill become a rare case. Users are unlikely to experience it in\npractice. But it's still worth describing it.\n\n>\n> I also think that the existing documentation should clearly point out\n> that the vacuum_cleanup_index_scale_factor only gets considered when\n> there are no updates or deletes since the last VACUUM -- that seems\n> like an existing problem worth fixing now. It's way too unclear that\n> this setting only really concerns append-only tables.\n\n+1\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 23 Feb 2021 10:23:53 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Mon, Feb 22, 2021 at 02:54:54PM -0800, Peter Geoghegan wrote:\n> On Mon, Feb 22, 2021 at 4:21 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > The 0001 patch looks good to me. In the documentation, I think we need\n> > to update the following paragraph in the description of\n> > vacuum_cleanup_index_scale_factor:\n> \n> Good point. I think that the structure should make the page deletion\n> triggering condition have only secondary importance -- it is only\n> described at all to be complete and exhaustive. The\n> vacuum_cleanup_index_scale_factor-related threshold is all that users\n> will really care about in this area.\n> \n> The reasons for this are: it's pretty rare to have many page\n> deletions, but never again delete/non-hot update even one single\n> tuple. 
But when that happens, it's *much* rarer still to *also* have\n> inserts, that might actually benefit from recycling the deleted page.\n> So it's very narrow.\n> \n> I think that I'll add a \"Note\" box that talks about the page deletion\n> stuff, right at the end. It's actually kind of an awkward thing to\n> describe, and yet I think we still need to describe it.\n> \n> I also think that the existing documentation should clearly point out\n> that the vacuum_cleanup_index_scale_factor only gets considered when\n> there are no updates or deletes since the last VACUUM -- that seems\n> like an existing problem worth fixing now. It's way too unclear that\n> this setting only really concerns append-only tables.\n\ne5d8a999030418a1b9e53d5f15ccaca7ed674877\n| I (pgeoghegan) have chosen to remove any mention of deleted pages in the\n| documentation of the vacuum_cleanup_index_scale_factor GUC/param, since\n| the presence of deleted (though unrecycled) pages is no longer of much\n| concern to users. The vacuum_cleanup_index_scale_factor description in\n| the docs now seems rather unclear in any case, and it should probably be\n| rewritten in the near future. Perhaps some passing mention of page\n| deletion will be added back at the same time.\n\nI think 8e12f4a25 wasn't quite aggressive enough in its changes, and I had\nanother patch laying around. 
I rebased and came up with this.\n\ndiff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\nindex 9851ca68b4..5da2e705b9 100644\n--- a/doc/src/sgml/config.sgml\n+++ b/doc/src/sgml/config.sgml\n@@ -8522,24 +8522,26 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;\n </term>\n <listitem>\n <para>\n Specifies [-the fraction-]{+a multiplier+} of the total number of heap tuples[-counted in-]\n[- the previous statistics collection-] that can be\n inserted [-without-]{+before+} incurring an index scan at the <command>VACUUM</command>\n cleanup stage.\n This setting currently applies to B-tree indexes only.\n </para>\n\n <para>\n [-If-]{+During <command>VACUUM</command>, if there are+} no {+dead+} tuples [-were deleted from-]{+found while+}\n{+ scanning+} the heap, [-B-tree-]{+then the index vacuum phase is skipped.+}\n{+ However,+} indexes [-are-]{+might+} still {+be+} scanned [-at-]{+during+} the[-<command>VACUUM</command>-] cleanup [-stage when-]{+phase. Setting this+}\n{+ parameter enables+} the [-index's-]{+possibility to skip scanning indexes during cleanup.+}\n{+ Indexes will always be scanned when their+} statistics are stale.\n Index statistics are considered {+to be+} stale if the number of newly\n inserted tuples exceeds the <varname>vacuum_cleanup_index_scale_factor</varname>\n [-fraction-]{+multiplier+} of the total number of heap tuples [-detected by-]{+at the time of+} the previous\n [-statistics collection.-]{+vacuum cleanup.+} The total number of heap tuples is stored in\n the index meta-page. 
Note that the meta-page does not include this data\n until <command>VACUUM</command> finds no dead tuples, so B-tree index\n [-scan-]{+scans+} at the cleanup stage [-can only-]{+cannot+} be skipped [-if the second and-]\n[- subsequent <command>VACUUM</command> cycles detect-]{+until after a vacuum cycle+}\n{+ which detects+} no dead tuples.\n </para>\n\n <para>", "msg_date": "Wed, 24 Feb 2021 22:13:52 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Mon, Feb 22, 2021 at 2:54 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Good point. I think that the structure should make the page deletion\n> triggering condition have only secondary importance -- it is only\n> described at all to be complete and exhaustive. The\n> vacuum_cleanup_index_scale_factor-related threshold is all that users\n> will really care about in this area.\n\nI pushed the main 64-bit XID commit just now. Thanks!\n\nAttached is v6, with the two remaining patches. No real changes. Just\nwant to keep CFBot happy.\n\nI would like to talk about vacuum_cleanup_index_scale_factor some\nmore. I didn't get very far with the vacuum_cleanup_index_scale_factor\ndocumentation (I just removed the existing references to page\ndeletion). When I was working on the docs I suddenly wondered: is\nvacuum_cleanup_index_scale_factor actually necessary? Can we not get\nrid of it completely?\n\nThe amvacuumcleanup docs seems to suggest that that would be okay:\n\n\"It is OK to return NULL if the index was not changed at all during\nthe VACUUM operation, but otherwise correct stats should be returned.\"\n\nCurrently, _bt_vacuum_needs_cleanup() gets to decide whether or not\nthe index will change during VACUUM (assuming no deleted pages in the\ncase of Postgres 11 - 13, or assuming less than ~5% on Postgres 14).\nSo why even bother with the heap tuple stuff at all? 
Why not simply\nremove the triggering logic that uses btm_last_cleanup_num_heap_tuples\n+ vacuum_cleanup_index_scale_factor completely? We can rely on ANALYZE\nto set pg_class.reltuples/pg_class.relpages instead. IIUC this is 100%\nallowed by the amvacuumcleanup contract.\n\nI think that the original design that made VACUUM set\npg_class.reltuples/pg_class.relpages in indexes (from 15+ years ago)\nassumed that it was cheap to handle statistics in passing -- the\nmarginal cost was approximately zero, so why not just do it? It was\nnot because VACUUM thinks it is valuable or urgent, and yet\nvacuum_cleanup_index_scale_factor seems to assume that it must.\n\nOf course, it may actually be hard/expensive to update the statistics\ndue to the vacuum_cleanup_index_scale_factor stuff that was added to\nPostgres 11. The autovacuum_vacuum_insert_threshold stuff that was\nadded to Postgres 13 also seems quite relevant. So I think that there\nis an inconsistency here.\n\nI can see one small problem with my plan of relying on ANALYZE to do\nthis: VACUUM ANALYZE trusts amvacuumcleanup/btvacuumcleanup (when\ncalled by lazyvacuum.c) to set pg_class.reltuples/pg_class.relpages\nwithin do_analyze_rel() -- even when amvacuumcleanup/btvacuumcleanup\nreturns NULL:\n\n /*\n * Same for indexes. Vacuum always scans all indexes, so if we're part of\n * VACUUM ANALYZE, don't overwrite the accurate count already inserted by\n * VACUUM.\n */\n if (!inh && !(params->options & VACOPT_VACUUM))\n {\n for (ind = 0; ind < nindexes; ind++)\n {\n AnlIndexData *thisdata = &indexdata[ind];\n double totalindexrows;\n\n totalindexrows = ceil(thisdata->tupleFract * totalrows);\n vac_update_relstats(Irel[ind],\n RelationGetNumberOfBlocks(Irel[ind]),\n totalindexrows,\n 0,\n false,\n InvalidTransactionId,\n InvalidMultiXactId,\n in_outer_xact);\n }\n }\n\nBut this just seems like a very old bug to me. 
This bug can be fixed\nseparately by teaching VACUUM ANALYZE to recognize cases where indexes\ndid not have their stats updated in the way it expects.\n\nBTW, note that btvacuumcleanup set pg_class.reltuples to 0 in all\ncases following the deduplication commit until my bug fix commit\n48e12913 (which was kind of a hack itself). This meant that the\nstatistics set by btvacuumcleanup (in the case where btbulkdelete\ndoesn't get called, the relevant case for\nvacuum_cleanup_index_scale_factor). So it was 100% wrong for months\nbefore anybody noticed (or at least until anybody complained).\n\nAm I missing something here?\n\n-- \nPeter Geoghegan", "msg_date": "Wed, 24 Feb 2021 20:42:15 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Wed, Feb 24, 2021 at 8:13 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I think 8e12f4a25 wasn't quite aggressive enough in its changes, and I had\n> another patch laying around. I rebased and came up with this.\n\nSee my remarks/questions about vacuum_cleanup_index_scale_factor\naddressed to Masahiko from a little earlier. I think that it might\nmake sense to just remove it. It might even make sense to disable it\nin the backbranches -- that approach might be better than trying to\nfix the \"IndexVacuumInfo.num_heap_tuples is only representative of the\nheap relation at the end of the VACUUM when considered within\nbtvacuumcleanup()\" bug. (Though I'm less confident on this second\npoint about a backpatchable fix.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 24 Feb 2021 21:21:54 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Thu, Feb 25, 2021 at 1:42 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Feb 22, 2021 at 2:54 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Good point. 
I think that the structure should make the page deletion\n> > triggering condition have only secondary importance -- it is only\n> > described at all to be complete and exhaustive. The\n> > vacuum_cleanup_index_scale_factor-related threshold is all that users\n> > will really care about in this area.\n>\n> I pushed the main 64-bit XID commit just now. Thanks!\n\nAwesome!\n\n>\n> Attached is v6, with the two remaining patches. No real changes. Just\n> want to keep CFBot happy.\n\nThank you for updating the patch. I'll have a look at them.\n\n>\n> I would like to talk about vacuum_cleanup_index_scale_factor some\n> more. I didn't get very far with the vacuum_cleanup_index_scale_factor\n> documentation (I just removed the existing references to page\n> deletion). When I was working on the docs I suddenly wondered: is\n> vacuum_cleanup_index_scale_factor actually necessary? Can we not get\n> rid of it completely?\n>\n> The amvacuumcleanup docs seems to suggest that that would be okay:\n>\n> \"It is OK to return NULL if the index was not changed at all during\n> the VACUUM operation, but otherwise correct stats should be returned.\"\n>\n> Currently, _bt_vacuum_needs_cleanup() gets to decide whether or not\n> the index will change during VACUUM (assuming no deleted pages in the\n> case of Postgres 11 - 13, or assuming less than ~5% on Postgres 14).\n> So why even bother with the heap tuple stuff at all? Why not simply\n> remove the triggering logic that uses btm_last_cleanup_num_heap_tuples\n> + vacuum_cleanup_index_scale_factor completely? We can rely on ANALYZE\n> to set pg_class.reltuples/pg_class.relpages instead. IIUC this is 100%\n> allowed by the amvacuumcleanup contract.\n>\n> I think that the original design that made VACUUM set\n> pg_class.reltuples/pg_class.relpages in indexes (from 15+ years ago)\n> assumed that it was cheap to handle statistics in passing -- the\n> marginal cost was approximately zero, so why not just do it? 
It was\n> not because VACUUM thinks it is valuable or urgent, and yet\n> vacuum_cleanup_index_scale_factor seems to assume that it must.\n>\n> Of course, it may actually be hard/expensive to update the statistics\n> due to the vacuum_cleanup_index_scale_factor stuff that was added to\n> Postgres 11. The autovacuum_vacuum_insert_threshold stuff that was\n> added to Postgres 13 also seems quite relevant. So I think that there\n> is an inconsistency here.\n\nbtvacuumcleanup() has been playing two roles: recycling deleted pages\nand collecting index statistics. Before introducing\nvacuum_cleanup_index_scale_factor, btvacuumcleanup() always scanned\nthe index for both purposes. So it was a problem that we did an index\nscan during anti-wraparound vacuum even if the table had not been\nchanged at all. The motivation of vacuum_cleanup_index_scale_factor is\nto decrease the frequency of collecting index statistics (but not to\neliminate it). Since deleted pages could be left behind by\nbtvacuumcleanup() skipping an index scan, we introduced\nbtm_oldest_btpo_xact (and it became unnecessary with commit\ne5d8a99903).\n\nIf we don't want btvacuumcleanup() to collect index statistics, we can\nremove vacuum_cleanup_index_scale_factor (at least from the btree\nperspective), as you mentioned. One thing that may be worth mentioning\nis that the difference between the index statistics taken by ANALYZE\nand btvacuumcleanup() is that the former is always an estimation.\nThat is calculated by compute_index_stats(), whereas the latter uses\nthe result of an index scan. If btvacuumcleanup() doesn't scan the\nindex and always returns NULL, it would become hard to get accurate\nindex statistics, for example in a static table case. 
I've not checked which cases index statistics\ncalculated by compute_index_stats() are inaccurate, though.\n\n>\n> I can see one small problem with my plan of relying on ANALYZE to do\n> this: VACUUM ANALYZE trusts amvacuumcleanup/btvacuumcleanup (when\n> called by lazyvacuum.c) to set pg_class.reltuples/pg_class.relpages\n> within do_analyze_rel() -- even when amvacuumcleanup/btvacuumcleanup\n> returns NULL:\n>\n> /*\n> * Same for indexes. Vacuum always scans all indexes, so if we're part of\n> * VACUUM ANALYZE, don't overwrite the accurate count already inserted by\n> * VACUUM.\n> */\n> if (!inh && !(params->options & VACOPT_VACUUM))\n> {\n> for (ind = 0; ind < nindexes; ind++)\n> {\n> AnlIndexData *thisdata = &indexdata[ind];\n> double totalindexrows;\n>\n> totalindexrows = ceil(thisdata->tupleFract * totalrows);\n> vac_update_relstats(Irel[ind],\n> RelationGetNumberOfBlocks(Irel[ind]),\n> totalindexrows,\n> 0,\n> false,\n> InvalidTransactionId,\n> InvalidMultiXactId,\n> in_outer_xact);\n> }\n> }\n>\n> But this just seems like a very old bug to me. This bug can be fixed\n> separately by teaching VACUUM ANALYZE to recognize cases where indexes\n> did not have their stats updated in the way it expects.\n\nAccording to the doc, if amvacuumcleanup/btvacuumcleanup returns NULL,\nit means the index is not changed at all. So do_analyze_rel() executed\nby VACUUM ANALYZE also doesn't need to update the index statistics\neven when amvacuumcleanup/btvacuumcleanup returns NULL. No?\n\n>\n> BTW, note that btvacuumcleanup set pg_class.reltuples to 0 in all\n> cases following the deduplication commit until my bug fix commit\n> 48e12913 (which was kind of a hack itself). This meant that the\n> statistics set by btvacuumcleanup (in the case where btbulkdelete\n> doesn't get called, the relevant case for\n> vacuum_cleanup_index_scale_factor). 
So it was 100% wrong for months\n> before anybody noticed (or at least until anybody complained).\n>\n\nMaybe we need more regression tests here.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 25 Feb 2021 22:42:10 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Thu, Feb 25, 2021 at 5:42 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> btvacuumcleanup() has been playing two roles: recycling deleted pages\n> and collecting index statistics.\n\nRight.\n\nI pushed the VACUUM VERBOSE \"index pages newly deleted\"\ninstrumentation patch earlier - it really isn't complicated or\ncontroversial, so I saw no reason to delay with that.\n\nAttached is v7, which now only has the final patch -- the optimization\nthat makes it possible for VACUUM to recycle pages that were newly\ndeleted during the same VACUUM operation. Still no real changes.\nAgain, I just wanted to keep CFBot happy. I haven't thought about or\nimproved this final patch recently, and it clearly needs more work to\nbe ready to commit.\n\n> If we don't want btvacuumcleanup() to collect index statistics, we can\n> remove vacuum_cleanup_index_scale_factor (at least from btree\n> perspectives), as you mentioned. One thing that may be worth\n> mentioning is that the difference between the index statistics taken\n> by ANALYZE and btvacuumcleanup() is that the former statistics is\n> always an estimation. That’s calculated by compute_index_stats()\n> whereas the latter uses the result of an index scan. If\n> btvacuumcleanup() doesn’t scan the index and always returns NULL, it\n> would become hard to get accurate index statistics, for example in a\n> static table case. 
I've not checked which cases index statistics\n> calculated by compute_index_stats() are inaccurate, though.\n\nThe historic context makes it easier to understand what to do here --\nit makes it clear that amvacuumcleanup() routine does not (or should\nnot) do any index scan when the index hasn't (and won't) be modified\nby the current VACUUM operation. The relevant sgml doc sentence I\nquoted to you recently (\"It is OK to return NULL if the index was not\nchanged at all during the VACUUM operation...\") was added by commit\ne57345975cf in 2006. Much of the relevant 2006 discussion is here,\nFWIW:\n\nhttps://www.postgresql.org/message-id/flat/26433.1146598265%40sss.pgh.pa.us#862ee11c24da63d0282e0025abbad19c\n\nSo now we have the formal rules for index AMs, as well as background\ninformation about what various hackers (mostly Tom) were considering\nwhen the rules were written.\n\n> According to the doc, if amvacuumcleanup/btvacuumcleanup returns NULL,\n> it means the index is not changed at all. So do_analyze_rel() executed\n> by VACUUM ANALYZE also doesn't need to update the index statistics\n> even when amvacuumcleanup/btvacuumcleanup returns NULL. No?\n\nConsider hashvacuumcleanup() -- here it is in full (it hasn't really\nchanged since 2006, when it was updated by that same commit I cited):\n\n/*\n * Post-VACUUM cleanup.\n *\n * Result: a palloc'd struct containing statistical info for VACUUM displays.\n */\nIndexBulkDeleteResult *\nhashvacuumcleanup(IndexVacuumInfo *info, IndexBulkDeleteResult *stats)\n{\n Relation rel = info->index;\n BlockNumber num_pages;\n\n /* If hashbulkdelete wasn't called, return NULL signifying no change */\n /* Note: this covers the analyze_only case too */\n if (stats == NULL)\n return NULL;\n\n /* update statistics */\n num_pages = RelationGetNumberOfBlocks(rel);\n stats->num_pages = num_pages;\n\n return stats;\n}\n\nClearly hashvacuumcleanup() was considered by Tom when he revised the\ndocumentation in 2006. 
Here are some observations about\nhashvacuumcleanup() that seem relevant now:\n\n* There is no \"analyze_only\" handling, just like nbtree.\n\n\"analyze_only\" is only used by GIN, even now, 15+ years after it was\nadded. GIN uses it to make autovacuum workers (never VACUUM outside of\nan AV worker) do pending list insertions for ANALYZE -- just to make\nit happen more often. This is a niche thing -- clearly we don't have\nto care about it in nbtree, even if we make btvacuumcleanup() (almost)\nalways return NULL when there was no btbulkdelete() call.\n\n* num_pages (which will become pg_class.relpages for the index) is not\nset when we return NULL -- hashvacuumcleanup() assumes that ANALYZE\nwill get to it eventually in the case where VACUUM does no real work\n(when it just returns NULL).\n\n* We also use RelationGetNumberOfBlocks() to set pg_class.relpages for\nindex relations during ANALYZE -- it's called when we call\nvac_update_relstats() (I quoted this do_analyze_rel() code to you\ndirectly in a recent email).\n\n* In general, pg_class.relpages isn't an estimate (because we use\nRelationGetNumberOfBlocks(), both in the VACUUM-updates case and the\nANALYZE-updates case) -- only pg_class.reltuples is truly an estimate\nduring ANALYZE, and so getting a \"true count\" seems to have only\nlimited practical importance.\n\nI think that this sets a precedent in support of my view that we can\nsimply get rid of vacuum_cleanup_index_scale_factor without any\nspecial effort to maintain pg_class.reltuples. As I said before, we\ncan safely make btvacuumcleanup() just like hashvacuumcleanup(),\nexcept when there are known deleted-but-not-recycled pages, where a\nfull index scan really is necessary for reasons that are not related\nto statistics at all (of course we still need the *logic* that was\nadded to nbtree by the vacuum_cleanup_index_scale_factor commit --\nthat is clearly necessary). 
My guess is that Tom would have made\nbtvacuumcleanup() look identical to hashvacuumcleanup() in 2006 if\nnbtree didn't have page deletion to consider -- but that had to be\nconsidered.\n\nMy reasoning here is also based on the tendency of the core code to\nmostly think of hash indexes as very similar to nbtree indexes.\n\nEven though \"the letter of the law\" favors removing the\nvacuum_cleanup_index_scale_factor GUC + param in the way I have\noutlined, that is not the only thing that matters -- we must also\nconsider \"the spirit of the law\". Realistically, hash indexes are far\nless popular than nbtree indexes, and so even if I am 100% correct in\ntheory, the real world might not be so convinced by my legalistic\nargument. We've already seen the issue with VACUUM ANALYZE (which has\nnot been truly consistent with the behavior of hashvacuumcleanup() for\nmany years). There might be more.\n\nI suppose I could ask Tom what he thinks? The hardest question is what\nto do in the backbranches...I really don't have a strong opinion right\nnow.\n\n> > BTW, note that btvacuumcleanup set pg_class.reltuples to 0 in all\n> > cases following the deduplication commit until my bug fix commit\n> > 48e12913 (which was kind of a hack itself). This meant that the\n> > statistics set by btvacuumcleanup (in the case where btbulkdelete\n> > doesn't get called, the relevant case for\n> > vacuum_cleanup_index_scale_factor). So it was 100% wrong for months\n> > before anybody noticed (or at least until anybody complained).\n> >\n>\n> Maybe we need more regression tests here.\n\nI agree, but my point was that even a 100% broken approach to stats\nwithin btvacuumcleanup() is not that noticeable. 
This supports the\nidea that it just doesn't matter very much if a cleanup-only scan of\nthe index never takes place (or only takes place when we need to\nrecycle deleted pages, which is generally rare but will become very\nrare once I commit the attached patch).\n\nAlso, my fix for this bug (commit 48e12913) was actually pretty bad;\nthere are now cases where the btvacuumcleanup()-only VACUUM case will\nset pg_class.reltuples to a value that is significantly below what it\nshould be (it all depends on how effective deduplication is with the\ndata). I probably should have made btvacuumcleanup()-only VACUUMs set\n\"stats->estimate_count = true\", purely to make sure that the core code\ndoesn't trust the statistics too much (it's okay for VACUUM VERBOSE\noutput only). Right now we can get a pg_class.reltuples that is\n\"exactly wrong\" -- it would actually be a big improvement if it was\n\"approximately correct\".\n\nAnother new concern for me (another concern unique to Postgres 13) is\nautovacuum_vacuum_insert_scale_factor-driven autovacuums.\n\n--\nPeter Geoghegan", "msg_date": "Thu, 25 Feb 2021 16:58:30 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Fri, Feb 26, 2021 at 9:58 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Thu, Feb 25, 2021 at 5:42 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > btvacuumcleanup() has been playing two roles: recycling deleted pages\n> > and collecting index statistics.\n>\n> Right.\n>\n> I pushed the VACUUM VERBOSE \"index pages newly deleted\"\n> instrumentation patch earlier - it really isn't complicated or\n> controversial, so I saw no reason to delay with that.\n\nThanks!\n\nI think we can improve bloom indexes in a separate patch so that they\nuse pages_newly_deleted.\n\n>\n> Attached is v7, which now only has the final patch -- the optimization\n> that makes it possible for VACUUM to recycle pages that were 
newly\n> deleted during the same VACUUM operation. Still no real changes.\n> Again, I just wanted to keep CFBot happy. I haven't thought about or\n> improved this final patch recently, and it clearly needs more work to\n> be ready to commit.\n\nI've looked at the patch. The patch is straightforward and I agree\nwith the direction.\n\nHere are some comments on the v7 patch.\n\n---\n+ /* Allocate _bt_newly_deleted_pages_recycle related information */\n+ vstate.ndeletedspace = 512;\n\nMaybe add a #define for the value 512?\n\n---\n+ for (int i = 0; i < vstate->ndeleted; i++)\n+ {\n+ BlockNumber blkno = vstate->deleted[i].blkno;\n+ FullTransactionId safexid = vstate->deleted[i].safexid;\n+\n+ if (!GlobalVisCheckRemovableFullXid(heapRel, safexid))\n+ break;\n+\n+ RecordFreeIndexPage(rel, blkno);\n+ stats->pages_free++;\n+ }\n\nShould we use 'continue' instead of 'break'? Or can we sort the\nvstate->deleted array by full XID and leave 'break'?\n\n---\nCurrently, the patch checks only newly-deleted pages to see if they are\nrecyclable at the end of btvacuumscan. What do you think about the\nidea of also checking pages that were deleted by previous vacuums\n(i.e., pages already marked P_ISDELETED() but not\nBTPageIsRecyclable())? There is still a little hope that such pages\nbecome recyclable by the time we reach the end of btvacuumscan. We will end\nup checking such pages twice (during btvacuumscan() and at the end of\nbtvacuumscan()) but if the cost of collecting and checking pages is\nnot high it could probably expand the chance of recycling pages.\n\nI'm going to reply to the discussion of vacuum_cleanup_index_scale_factor\nin a separate mail. Or maybe it's better to start a new thread for\nthat so as to get opinions from other hackers. 
It's no longer related to\nthe subject.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 26 Feb 2021 17:03:37 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Fri, Feb 26, 2021 at 9:58 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > If we don't want btvacuumcleanup() to collect index statistics, we can\n> > remove vacuum_cleanup_index_scale_factor (at least from btree\n> > perspectives), as you mentioned. One thing that may be worth\n> > mentioning is that the difference between the index statistics taken\n> > by ANALYZE and btvacuumcleanup() is that the former statistics is\n> > always an estimation. That’s calculated by compute_index_stats()\n> > whereas the latter uses the result of an index scan. If\n> > btvacuumcleanup() doesn’t scan the index and always returns NULL, it\n> > would become hard to get accurate index statistics, for example in a\n> > static table case. I've not checked which cases index statistics\n> > calculated by compute_index_stats() are inaccurate, though.\n>\n> The historic context makes it easier to understand what to do here --\n> it makes it clear that amvacuumcleanup() routine does not (or should\n> not) do any index scan when the index hasn't (and won't) be modified\n> by the current VACUUM operation. The relevant sgml doc sentence I\n> quoted to you recently (\"It is OK to return NULL if the index was not\n> changed at all during the VACUUM operation...\") was added by commit\n> e57345975cf in 2006. 
Much of the relevant 2006 discussion is here,\n> FWIW:\n>\n> https://www.postgresql.org/message-id/flat/26433.1146598265%40sss.pgh.pa.us#862ee11c24da63d0282e0025abbad19c\n>\n> So now we have the formal rules for index AMs, as well as background\n> information about what various hackers (mostly Tom) were considering\n> when the rules were written.\n>\n> > According to the doc, if amvacuumcleanup/btvacuumcleanup returns NULL,\n> > it means the index is not changed at all. So do_analyze_rel() executed\n> > by VACUUM ANALYZE also doesn't need to update the index statistics\n> > even when amvacuumcleanup/btvacuumcleanup returns NULL. No?\n>\n> Consider hashvacuumcleanup() -- here it is in full (it hasn't really\n> changed since 2006, when it was updated by that same commit I cited):\n>\n> /*\n> * Post-VACUUM cleanup.\n> *\n> * Result: a palloc'd struct containing statistical info for VACUUM displays.\n> */\n> IndexBulkDeleteResult *\n> hashvacuumcleanup(IndexVacuumInfo *info, IndexBulkDeleteResult *stats)\n> {\n> Relation rel = info->index;\n> BlockNumber num_pages;\n>\n> /* If hashbulkdelete wasn't called, return NULL signifying no change */\n> /* Note: this covers the analyze_only case too */\n> if (stats == NULL)\n> return NULL;\n>\n> /* update statistics */\n> num_pages = RelationGetNumberOfBlocks(rel);\n> stats->num_pages = num_pages;\n>\n> return stats;\n> }\n>\n> Clearly hashvacuumcleanup() was considered by Tom when he revised the\n> documentation in 2006. Here are some observations about\n> hashvacuumcleanup() that seem relevant now:\n>\n> * There is no \"analyze_only\" handling, just like nbtree.\n>\n> \"analyze_only\" is only used by GIN, even now, 15+ years after it was\n> added. GIN uses it to make autovacuum workers (never VACUUM outside of\n> an AV worker) do pending list insertions for ANALYZE -- just to make\n> it happen more often. 
This is a niche thing -- clearly we don't have\n> to care about it in nbtree, even if we make btvacuumcleanup() (almost)\n> always return NULL when there was no btbulkdelete() call.\n>\n> * num_pages (which will become pg_class.relpages for the index) is not\n> set when we return NULL -- hashvacuumcleanup() assumes that ANALYZE\n> will get to it eventually in the case where VACUUM does no real work\n> (when it just returns NULL).\n>\n> * We also use RelationGetNumberOfBlocks() to set pg_class.relpages for\n> index relations during ANALYZE -- it's called when we call\n> vac_update_relstats() (I quoted this do_analyze_rel() code to you\n> directly in a recent email).\n>\n> * In general, pg_class.relpages isn't an estimate (because we use\n> RelationGetNumberOfBlocks(), both in the VACUUM-updates case and the\n> ANALYZE-updates case) -- only pg_class.reltuples is truly an estimate\n> during ANALYZE, and so getting a \"true count\" seems to have only\n> limited practical importance.\n>\n> I think that this sets a precedent in support of my view that we can\n> simply get rid of vacuum_cleanup_index_scale_factor without any\n> special effort to maintain pg_class.reltuples. As I said before, we\n> can safely make btvacuumcleanup() just like hashvacuumcleanup(),\n> except when there are known deleted-but-not-recycled pages, where a\n> full index scan really is necessary for reasons that are not related\n> to statistics at all (of course we still need the *logic* that was\n> added to nbtree by the vacuum_cleanup_index_scale_factor commit --\n> that is clearly necessary). My guess is that Tom would have made\n> btvacuumcleanup() look identical to hashvacuumcleanup() in 2006 if\n> nbtree didn't have page deletion to consider -- but that had to be\n> considered.\n\nMakes sense. 
If getting a true pg_class.reltuples is not important in\npractice, it seems not to need btvacuumcleanup() do an index scan for\ngetting statistics purpose.\n\n>\n> My reasoning here is also based on the tendency of the core code to\n> mostly think of hash indexes as very similar to nbtree indexes.\n>\n> Even though \"the letter of the law\" favors removing the\n> vacuum_cleanup_index_scale_factor GUC + param in the way I have\n> outlined, that is not the only thing that matters -- we must also\n> consider \"the spirit of the law\". Realistically, hash indexes are far\n> less popular than nbtree indexes, and so even if I am 100% correct in\n> theory, the real world might not be so convinced by my legalistic\n> argument. We've already seen the issue with VACUUM ANALYZE (which has\n> not been truly consistent with the behavior hashvacuumcleanup() for\n> many years). There might be more.\n>\n> I suppose I could ask Tom what he thinks?\n\n+1\n\n> The hardest question is what\n> to do in the backbranches...I really don't have a strong opinion right\n> now.\n\nSince it seems not a bug I personally think we don't need to do\nanything for back branches. But if we want not to trigger an index\nscan by vacuum_cleanup_index_scale_factor, we could change the default\nvalue to a high value (say, to 10000) so that it can skip an index\nscan in most cases.\n\n>\n> > > BTW, note that btvacuumcleanup set pg_class.reltuples to 0 in all\n> > > cases following the deduplication commit until my bug fix commit\n> > > 48e12913 (which was kind of a hack itself). This meant that the\n> > > statistics set by btvacuumcleanup (in the case where btbulkdelete\n> > > doesn't get called, the relevant case for\n> > > vacuum_cleanup_index_scale_factor). 
So it was 100% wrong for months\n> > > before anybody noticed (or at least until anybody complained).\n> > >\n> >\n> > Maybe we need more regression tests here.\n>\n> I agree, but my point was that even a 100% broken approach to stats\n> within btvacuumcleanup() is not that noticeable. This supports the\n> idea that it just doesn't matter very much if a cleanup-only scan of\n> the index never takes place (or only takes place when we need to\n> recycle deleted pages, which is generally rare but will become very\n> rare once I commit the attached patch).\n>\n> Also, my fix for this bug (commit 48e12913) was actually pretty bad;\n> there are now cases where the btvacuumcleanup()-only VACUUM case will\n> set pg_class.reltuples to a value that is significantly below what it\n> should be (it all depends on how effective deduplication is with the\n> data). I probably should have made btvacuumcleanup()-only VACUUMs set\n> \"stats->estimate_count = true\", purely to make sure that the core code\n> doesn't trust the statistics too much (it's okay for VACUUM VERBOSE\n> output only). Right now we can get a pg_class.reltuples that is\n> \"exactly wrong\" -- it would actually be a big improvement if it was\n> \"approximately correct\".\n\nUnderstood. Thank you for your explanation.\n\n>\n> Another new concern for me (another concern unique to Postgres 13) is\n> autovacuum_vacuum_insert_scale_factor-driven autovacuums.\n\nIIUC the purpose of autovacuum_vacuum_insert_scale_factor is\nvisibility map maintenance. 
And as per this discussion, it seems not\nnecessary to do an index scan in btvacuumcleanup() triggered by\nautovacuum_vacuum_insert_scale_factor.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 1 Mar 2021 13:07:45 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Sun, Feb 28, 2021 at 8:08 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > Even though \"the letter of the law\" favors removing the\n> > vacuum_cleanup_index_scale_factor GUC + param in the way I have\n> > outlined, that is not the only thing that matters -- we must also\n> > consider \"the spirit of the law\".\n\n> > I suppose I could ask Tom what he thinks?\n>\n> +1\n\nAre you going to start a new thread, or should I?\n\n> Since it seems not a bug I personally think we don't need to do\n> anything for back branches. But if we want not to trigger an index\n> scan by vacuum_cleanup_index_scale_factor, we could change the default\n> value to a high value (say, to 10000) so that it can skip an index\n> scan in most cases.\n\nOne reason to remove vacuum_cleanup_index_scale_factor in the back\nbranches is that it removes any need to fix the\n\"IndexVacuumInfo.num_heap_tuples is inaccurate outside of\nbtvacuumcleanup-only VACUUMs\" bug -- it just won't matter if\nbtm_last_cleanup_num_heap_tuples is inaccurate anymore. (I am still\nnot sure about backpatch being a good idea, though.)\n\n> > Another new concern for me (another concern unique to Postgres 13) is\n> > autovacuum_vacuum_insert_scale_factor-driven autovacuums.\n>\n> IIUC the purpose of autovacuum_vacuum_insert_scale_factor is\n> visibility map maintenance. 
And as per this discussion, it seems not\n> necessary to do an index scan in btvacuumcleanup() triggered by\n> autovacuum_vacuum_insert_scale_factor.\n\nArguably the question of skipping scanning the index should have been\nconsidered by the autovacuum_vacuum_insert_scale_factor patch when it\nwas committed for Postgres 13 -- but it wasn't. There is a regression\nthat was tied to autovacuum_vacuum_insert_scale_factor in Postgres 13\nby Mark Callaghan, which I suspect is relevant:\n\nhttps://smalldatum.blogspot.com/2021/01/insert-benchmark-postgres-is-still.html\n\nThe blog post says: \"Updates - To understand the small regression\nmentioned above for the l.i1 test (more CPU & write IO) I repeated the\ntest with 100M rows using 2 configurations: one disabled index\ndeduplication and the other disabled insert-triggered autovacuum.\nDisabling index deduplication had no effect and disabling\ninsert-triggered autovacuum resolves the regression.\"\n\nThis is quite specifically with an insert-only workload, with 4\nindexes (that's from memory, but I'm pretty sure it's 4). I think that\nthe failure to account for skipping index scans is probably the big\nproblem here. Scanning the heap to set VM bits is unlikely to be\nexpensive compared to the full index scans. An insert-only workload is\ngoing to find most of the heap blocks it scans to set VM bits in\nshared_buffers. Not so for the indexes.\n\nSo in Postgres 13 we have this autovacuum_vacuum_insert_scale_factor\nissue, in addition to the deduplication + btvacuumcleanup issue we\ntalked about (the problems left by my Postgres 13 bug fix commit\n48e12913). 
These two issues make removing\nvacuum_cleanup_index_scale_factor tempting, even in the back branches\n-- it might actually be the more conservative approach, at least for\nPostgres 13.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 1 Mar 2021 13:40:29 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Mon, Mar 1, 2021 at 1:40 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Since it seems not a bug I personally think we don't need to do\n> > anything for back branches. But if we want not to trigger an index\n> > scan by vacuum_cleanup_index_scale_factor, we could change the default\n> > value to a high value (say, to 10000) so that it can skip an index\n> > scan in most cases.\n>\n> One reason to remove vacuum_cleanup_index_scale_factor in the back\n> branches is that it removes any need to fix the\n> \"IndexVacuumInfo.num_heap_tuples is inaccurate outside of\n> btvacuumcleanup-only VACUUMs\" bug -- it just won't matter if\n> btm_last_cleanup_num_heap_tuples is inaccurate anymore. (I am still\n> not sure about backpatch being a good idea, though.)\n\nAttached is v8 of the patch series, which has new patches. No real\nchanges compared to v7 for the first patch, though.\n\nThere are now two additional prototype patches to remove the\nvacuum_cleanup_index_scale_factor GUC/param along the lines we've\ndiscussed. This requires teaching VACUUM ANALYZE about when to trust\nVACUUM cleanup to set the statistics (that's what v8-0002* does).\n\nThe general idea for VACUUM ANALYZE in v8-0002* is to assume that\ncleanup-only VACUUMs won't set the statistics accurately -- so we need\nto keep track of this during VACUUM (in case it's a VACUUM ANALYZE,\nwhich now needs to know if index vacuuming was \"cleanup only\" or not).\nThis is not a new thing for hash indexes -- they never did anything in\nthe cleanup-only case (hashvacuumcleanup() just returns NULL). 
And now\nnbtree does the same thing (usually). Not all AMs will, but the new\nassumption is much better than the one it replaces.\n\nI thought of another existing case that violated the faulty assumption\nmade by VACUUM ANALYZE (which v8-0002* fixes): VACUUM's INDEX_CLEANUP\nfeature (which was added to Postgres 12 by commit a96c41feec6) is\nanother case where VACUUM does nothing with indexes. VACUUM ANALYZE\nmistakenly considers that index vacuuming must have run and set the\npg_class statistics to an accurate value (more accurate than it is\ncapable of). But with INDEX_CLEANUP we won't even call\namvacuumcleanup().\n\n-- \nPeter Geoghegan", "msg_date": "Mon, 1 Mar 2021 19:25:29 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Tue, Mar 2, 2021 at 6:40 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Sun, Feb 28, 2021 at 8:08 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > Even though \"the letter of the law\" favors removing the\n> > > vacuum_cleanup_index_scale_factor GUC + param in the way I have\n> > > outlined, that is not the only thing that matters -- we must also\n> > > consider \"the spirit of the law\".\n>\n> > > I suppose I could ask Tom what he thinks?\n> >\n> > +1\n>\n> Are you going to start a new thread, or should I?\n\nOk, I'll start a new thread soon.\n\n>\n> > Since it seems not a bug I personally think we don't need to do\n> > anything for back branches. 
But if we want not to trigger an index\n> > scan by vacuum_cleanup_index_scale_factor, we could change the default\n> > value to a high value (say, to 10000) so that it can skip an index\n> > scan in most cases.\n>\n> One reason to remove vacuum_cleanup_index_scale_factor in the back\n> branches is that it removes any need to fix the\n> \"IndexVacuumInfo.num_heap_tuples is inaccurate outside of\n> btvacuumcleanup-only VACUUMs\" bug -- it just won't matter if\n> btm_last_cleanup_num_heap_tuples is inaccurate anymore. (I am still\n> not sure about backpatch being a good idea, though.)\n\nI think that removing vacuum_cleanup_index_scale_factor in the back\nbranches would affect existing installations too much. It would be\nbetter to have btree indexes not use this parameter while not changing\nthe contents of the meta page. That is, just remove the check related to\nvacuum_cleanup_index_scale_factor from _bt_vacuum_needs_cleanup(). And\nI personally prefer to fix the \"IndexVacuumInfo.num_heap_tuples is\ninaccurate outside of btvacuumcleanup-only VACUUMs\" bug separately.\n\n>\n> > > Another new concern for me (another concern unique to Postgres 13) is\n> > > autovacuum_vacuum_insert_scale_factor-driven autovacuums.\n> >\n> > IIUC the purpose of autovacuum_vacuum_insert_scale_factor is\n> > visibility map maintenance. And as per this discussion, it seems not\n> > necessary to do an index scan in btvacuumcleanup() triggered by\n> > autovacuum_vacuum_insert_scale_factor.\n>\n> Arguably the question of skipping scanning the index should have been\n> considered by the autovacuum_vacuum_insert_scale_factor patch when it\n> was committed for Postgres 13 -- but it wasn't. 
There is a regression\n> that was tied to autovacuum_vacuum_insert_scale_factor in Postgres 13\n> by Mark Callaghan, which I suspect is relevant:\n>\n> https://smalldatum.blogspot.com/2021/01/insert-benchmark-postgres-is-still.html\n>\n> The blog post says: \"Updates - To understand the small regression\n> mentioned above for the l.i1 test (more CPU & write IO) I repeated the\n> test with 100M rows using 2 configurations: one disabled index\n> deduplication and the other disabled insert-triggered autovacuum.\n> Disabling index deduplication had no effect and disabling\n> insert-triggered autovacuum resolves the regression.\"\n>\n> This is quite specifically with an insert-only workload, with 4\n> indexes (that's from memory, but I'm pretty sure it's 4). I think that\n> the failure to account for skipping index scans is probably the big\n> problem here. Scanning the heap to set VM bits is unlikely to be\n> expensive compared to the full index scans. An insert-only workload is\n> going to find most of the heap blocks it scans to set VM bits in\n> shared_buffers. Not so for the indexes.\n>\n> So in Postgres 13 we have this autovacuum_vacuum_insert_scale_factor\n> issue, in addition to the deduplication + btvacuumcleanup issue we\n> talked about (the problems left by my Postgres 13 bug fix commit\n> 48e12913). These two issues make removing\n> vacuum_cleanup_index_scale_factor tempting, even in the back branches\n> -- it might actually be the more conservative approach, at least for\n> Postgres 13.\n\nYeah, this argument makes sense to me. The default values of\nautovacuum_vacuum_insert_scale_factor/threshold are 0.2 and 1000\nrespectively, whereas that of vacuum_cleanup_index_scale_factor is 0.1.\nIt means that in an insert-only workload with default settings,\nautovacuums triggered by autovacuum_vacuum_insert_scale_factor always\nscan the whole btree index to update the index statistics. I think most\nusers would not expect this behavior. 
As I mentioned above, I think we\ncan have nbtree not use this parameter or increase the default value\nof vacuum_cleanup_index_scale_factor in back branches.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 2 Mar 2021 13:06:02 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Mon, Mar 1, 2021 at 8:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> I think that removing vacuum_cleanup_index_scale_factor in the back\n> branches would affect the existing installation much. It would be\n> better to have btree indexes not use this parameter while not changing\n> the contents of meta page. That is, just remove the check related to\n> vacuum_cleanup_index_scale_factor from _bt_vacuum_needs_cleanup().\n\nThat's really what I meant -- we cannot just remove a GUC or storage\nparam in the backbranches, of course (it breaks postgresql.conf, stuff\nlike that). But we can disable GUCs at the code level.\n\n> And\n> I personally prefer to fix the \"IndexVacuumInfo.num_heap_tuples is\n> inaccurate outside of btvacuumcleanup-only VACUUMs\" bug separately.\n\nI have not decided on my own position on the backbranches. Hopefully\nthere will be clear guidance from other hackers.\n\n> Yeah, this argument makes sense to me. The default values of\n> autovacuum_vacuum_insert_scale_factor/threshold are 0.2 and 1000\n> respectively whereas one of vacuum_cleanup_index_scale_factor is 0.1.\n> It means that in insert-only workload with default settings,\n> autovacuums triggered by autovacuum_vacuum_insert_scale_factor always\n> scan the all btree index to update the index statistics. I think most\n> users would not expect this behavior. 
As I mentioned above, I think we\n> can have nbtree not use this parameter or increase the default value\n> of vacuum_cleanup_index_scale_factor in back branches.\n\nIt's not just a problem when autovacuum_vacuum_insert_scale_factor\ntriggers a cleanup-only VACUUM in all indexes. It's also a problem\nwith cases where there is a small number of dead tuples by an\nautovacuum VACUUM triggered by autovacuum_vacuum_insert_scale_factor.\nIt will get index scans done by btbulkdeletes() -- which are more\nexpensive than a VACUUM that only calls btvacuumcleanup().\n\nOf course this is exactly what the patch you're working on for\nPostgres 14 helps with. It's actually not very different (1 dead tuple\nand 0 dead tuples are not very different). So it makes sense that we\nended up here -- vacuumlazy.c alone should be in control of this\nstuff, because only vacuumlazy.c has the authority to see that 1 dead\ntuple and 0 dead tuples should be considered the same thing (or almost\nthe same). So...maybe we can only truly fix the problem in Postgres 14\nanyway, and should just accept that?\n\nOTOH scanning the indexes for no reason when\nautovacuum_vacuum_insert_scale_factor triggers an autovacuum VACUUM\ndoes seem *particularly* silly. 
So I don't know what to think.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 1 Mar 2021 20:42:34 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Tue, Mar 2, 2021 at 1:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Mar 2, 2021 at 6:40 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > On Sun, Feb 28, 2021 at 8:08 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > Even though \"the letter of the law\" favors removing the\n> > > > vacuum_cleanup_index_scale_factor GUC + param in the way I have\n> > > > outlined, that is not the only thing that matters -- we must also\n> > > > consider \"the spirit of the law\".\n> >\n> > > > I suppose I could ask Tom what he thinks?\n> > >\n> > > +1\n> >\n> > Are you going to start a new thread, or should I?\n>\n> Ok, I'll start a new thread soon.\n\nI've started a new thread[1]. Please feel free to add your thoughts.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoA4WHthN5uU6%2BWScZ7%2BJ_RcEjmcuH94qcoUPuB42ShXzg%40mail.gmail.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 2 Mar 2021 15:35:05 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Tue, Mar 2, 2021 at 1:42 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Mar 1, 2021 at 8:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I think that removing vacuum_cleanup_index_scale_factor in the back\n> > branches would affect the existing installation much. It would be\n> > better to have btree indexes not use this parameter while not changing\n> > the contents of meta page. 
That is, just remove the check related to\n> > vacuum_cleanup_index_scale_factor from _bt_vacuum_needs_cleanup().\n>\n> That's really what I meant -- we cannot just remove a GUC or storage\n> param in the backbranches, of course (it breaks postgresql.conf, stuff\n> like that). But we can disable GUCs at the code level.\n\nOh ok, I misunderstood.\n\n>\n> > And\n> > I personally prefer to fix the \"IndexVacuumInfo.num_heap_tuples is\n> > inaccurate outside of btvacuumcleanup-only VACUUMs\" bug separately.\n>\n> I have not decided on my own position on the backbranches. Hopefully\n> there will be clear guidance from other hackers.\n\n+1\n\n>\n> > Yeah, this argument makes sense to me. The default values of\n> > autovacuum_vacuum_insert_scale_factor/threshold are 0.2 and 1000\n> > respectively whereas one of vacuum_cleanup_index_scale_factor is 0.1.\n> > It means that in insert-only workload with default settings,\n> > autovacuums triggered by autovacuum_vacuum_insert_scale_factor always\n> > scan the all btree index to update the index statistics. I think most\n> > users would not expect this behavior. As I mentioned above, I think we\n> > can have nbtree not use this parameter or increase the default value\n> > of vacuum_cleanup_index_scale_factor in back branches.\n>\n> It's not just a problem when autovacuum_vacuum_insert_scale_factor\n> triggers a cleanup-only VACUUM in all indexes. It's also a problem\n> with cases where there is a small number of dead tuples by an\n> autovacuum VACUUM triggered by autovacuum_vacuum_insert_scale_factor.\n> It will get index scans done by btbulkdeletes() -- which are more\n> expensive than a VACUUM that only calls btvacuumcleanup().\n>\n> Of course this is exactly what the patch you're working on for\n> Postgres 14 helps with. It's actually not very different (1 dead tuple\n> and 0 dead tuples are not very different). 
So it makes sense that we\n> ended up here -- vacuumlazy.c alone should be in control of this\n> stuff, because only vacuumlazy.c has the authority to see that 1 dead\n> tuple and 0 dead tuples should be considered the same thing (or almost\n> the same). So...maybe we can only truly fix the problem in Postgres 14\n> anyway, and should just accept that?\n\nYeah, I think that's right.\n\nPerhaps we can do something so that autovacuums triggered by\nautovacuum_vacuum_insert_scale_factor are triggered on only a true\ninsert-only case (e.g., by checking if n_dead_tup is 0).\n\n>\n> OTOH scanning the indexes for no reason when\n> autovacuum_vacuum_insert_scale_factor triggers an autovacuum VACUUM\n> does seem *particularly* silly.\n\nAgreed.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 8 Mar 2021 13:52:09 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Sun, Mar 7, 2021 at 8:52 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> Yeah, I think that's right.\n>\n> Perhaps we can do something so that autovacuums triggered by\n> autovacuum_vacuum_insert_scale_factor are triggered on only a true\n> insert-only case (e.g., by checking if n_dead_tup is 0).\n\nRight -- that's really what it would mean to \"remove\nvacuum_cleanup_index_scale_factor in the backbranches\".\n\nI now think that it won't even be necessary to make many changes\nwithin VACUUM ANALYZE to avoid unwanted side-effects from removing\nvacuum_cleanup_index_scale_factor, per my mail to Tom today:\n\nhttps://postgr.es/m/CAH2-WzknxdComjhqo4SUxVFk_Q1171GJO2ZgHZ1Y6pion6u8rA@mail.gmail.com\n\nI'm starting to lean towards \"removing\nvacuum_cleanup_index_scale_factor\" in Postgres 13 and master only,\npurely to fix the two issues in Postgres 13 (the insert-driven vacuum\nissue and the deduplication stats issue I go into in the mail I link\nto). 
A much more conservative approach should be used to fix the more\nsuperficial issue -- the issue of getting an accurate value (for\npg_class.reltuples) from \"info->num_heap_tuples\". As discussed\nalready, the conservative fix is to delay reading\n\"info->num_heap_tuples\" until btvacuumcleanup(), even in cases where\nthere are btbulkdelete() calls for the VACUUM.\n\nThen we can revisit your patch to make vacuumlazy.c skip index\nvacuuming when there are very few dead tuples, but more than 0 dead\ntuples [1]. I should be able to commit that for Postgres 14.\n\n(I will probably finish off my other patch to make nbtree VACUUM\nrecycle pages deleted during the same VACUUM operation last of all.)\n\n[1] https://postgr.es/m/CAD21AoAtZb4+HJT_8RoOXvu4HM-Zd4HKS3YSMCH6+-W=bDyh-w@mail.gmail.com\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 8 Mar 2021 18:03:43 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Mon, Mar 1, 2021 at 7:25 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached is v8 of the patch series, which has new patches. No real\n> changes compared to v7 for the first patch, though.\n\nHere is another bitrot-fix-only revision, v9. Just the recycling patch again.\n\nI'll commit this when we get your patch committed. Still haven't\ndecided on exactly how much more aggressive we should be. 
For example the\nuse of the heap relation within _bt_newly_deleted_pages_recycle()\nmight have unintended consequences for recycling efficiency with some\nworkloads, since it doesn't agree with _bt_getbuf() (it is still \"more\nambitious\" than _bt_getbuf(), at least for now).\n\n-- \nPeter Geoghegan", "msg_date": "Wed, 10 Mar 2021 17:34:12 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Wed, Mar 10, 2021 at 5:34 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Here is another bitrot-fix-only revision, v9. Just the recycling patch again.\n\nI committed the final nbtree page deletion patch just now -- the one\nthat attempts to make recycling happen for newly deleted pages. Thanks\nfor all your work on patch review, Masahiko!\n\nI'll close out the CF item for this patch series now.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 21 Mar 2021 15:27:40 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Mon, Mar 22, 2021 at 7:27 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Mar 10, 2021 at 5:34 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Here is another bitrot-fix-only revision, v9. Just the recycling patch again.\n>\n> I committed the final nbtree page deletion patch just now -- the one\n> that attempts to make recycling happen for newly deleted pages. Thanks\n> for all your work on patch review, Masahiko!\n\nYou're welcome! Those are really good improvements.\n\nBy this patch series, btree indexes became like hash indexes in terms\nof amvacuumcleanup. We do an index scan at btvacuumcleanup() in the\ntwo cases: metapage upgrading and more than 5%\ndeleted-but-not-yet-recycled pages. Both cases seem rare cases. So do\nwe want to disable parallel index cleanup for btree indexes like hash\nindexes? 
That is, remove VACUUM_OPTION_PARALLEL_COND_CLEANUP from\namparallelvacuumoptions. IMO we can live with the current\nconfiguration just in case where the user runs into such rare\nsituations (especially for the latter case). In most cases, parallel\nvacuum workers for index cleanup might exit with no-op but the\nside-effect (wasting resources and overhead etc) would not be big. If\nwe want to enable it only in particular cases, we would need to have\nanother way for index AM to tell lazy vacuum whether or not to allow a\nparallel worker to process the index at that time. What do you think?\n\nI’m not sure we need changes but I think it’s worth discussing here.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 24 Mar 2021 00:13:56 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Tue, Mar 23, 2021 at 8:14 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> By this patch series, btree indexes became like hash indexes in terms\n> of amvacuumcleanup. We do an index scan at btvacuumcleanup() in the\n> two cases: metapage upgrading and more than 5%\n> deleted-but-not-yet-recycled pages. Both cases seem rare cases. So do\n> we want to disable parallel index cleanup for btree indexes like hash\n> indexes? That is, remove VACUUM_OPTION_PARALLEL_COND_CLEANUP from\n> amparallelvacuumoptions.\n\nMy recent \"Recycle nbtree pages deleted during same VACUUM\" commit\nimproved the efficiency of recycling, but I still think that it was a\nbit of a hack. Or at least it didn't go far enough in fixing the old\ndesign, which is itself a bit of a hack.\n\nAs I said back on February 15, a truly good design for nbtree page\ndeletion + recycling would have crash safety built in. 
If page\ndeletion itself is crash safe, it really makes sense to make\neverything crash safe (especially because we're managing large chunks\nof equisized free space, unlike in heapam). And as I also said back\nthen, a 100% crash-safe design could naturally shift the problem of\nnbtree page recycle safety from the producer/VACUUM side, to the\nconsumer/_bt_getbuf() side. It should be completely separated from\nwhen VACUUM runs, and what VACUUM can discover about recycle safety in\npassing, at the end.\n\nThat approach would completely eliminate the need to do any work in\nbtvacuumcleanup(), which would make it natural to remove\nVACUUM_OPTION_PARALLEL_COND_CLEANUP from nbtree -- the implementation\nof btvacuumcleanup() would just look like hashvacuumcleanup() does now\n-- it could do practically nothing, making this 100% okay.\n\nFor now I have my doubts that it is appropriate to make this change.\nIt seems as if the question of whether or not\nVACUUM_OPTION_PARALLEL_COND_CLEANUP should be used is basically the\nsame question as \"Does the vacuumcleanup() callback for this index AM\nlook exactly like hashvacuumcleanup()?\".\n\n> IMO we can live with the current\n> configuration just in case where the user runs into such rare\n> situations (especially for the latter case). In most cases, parallel\n> vacuum workers for index cleanup might exit with no-op but the\n> side-effect (wasting resources and overhead etc) would not be big. If\n> we want to enable it only in particular cases, we would need to have\n> another way for index AM to tell lazy vacuum whether or not to allow a\n> parallel worker to process the index at that time. What do you think?\n\nI am concerned about unintended consequences, like never noticing that\nwe should really recycle known deleted pages not yet placed in the FSM\n(it's hard to think through very rare cases like this with\nconfidence). 
Is it really so bad if we launch parallel workers that we\ndon't really need for a parallel VACUUM?\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 23 Mar 2021 20:09:59 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" }, { "msg_contents": "On Wed, Mar 24, 2021 at 12:10 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Mar 23, 2021 at 8:14 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > By this patch series, btree indexes became like hash indexes in terms\n> > of amvacuumcleanup. We do an index scan at btvacuumcleanup() in the\n> > two cases: metapage upgrading and more than 5%\n> > deleted-but-not-yet-recycled pages. Both cases seem rare cases. So do\n> > we want to disable parallel index cleanup for btree indexes like hash\n> > indexes? That is, remove VACUUM_OPTION_PARALLEL_COND_CLEANUP from\n> > amparallelvacuumoptions.\n>\n> My recent \"Recycle nbtree pages deleted during same VACUUM\" commit\n> improved the efficiency of recycling, but I still think that it was a\n> bit of a hack. Or at least it didn't go far enough in fixing the old\n> design, which is itself a bit of a hack.\n>\n> As I said back on February 15, a truly good design for nbtree page\n> deletion + recycling would have crash safety built in. If page\n> deletion itself is crash safe, it really makes sense to make\n> everything crash safe (especially because we're managing large chunks\n> of equisized free space, unlike in heapam). And as I also said back\n> then, a 100% crash-safe design could naturally shift the problem of\n> nbtree page recycle safety from the producer/VACUUM side, to the\n> consumer/_bt_getbuf() side. 
It should be completely separated from\n> when VACUUM runs, and what VACUUM can discover about recycle safety in\n> passing, at the end.\n>\n> That approach would completely eliminate the need to do any work in\n> btvacuumcleanup(), which would make it natural to remove\n> VACUUM_OPTION_PARALLEL_COND_CLEANUP from nbtree -- the implementation\n> of btvacuumcleanup() would just look like hashvacuumcleanup() does now\n> -- it could do practically nothing, making this 100% okay.\n>\n> For now I have my doubts that it is appropriate to make this change.\n> It seems as if the question of whether or not\n> VACUUM_OPTION_PARALLEL_COND_CLEANUP should be used is basically the\n> same question as \"Does the vacuumcleanup() callback for this index AM\n> look exactly like hashvacuumcleanup()?\".\n>\n> > IMO we can live with the current\n> > configuration just in case where the user runs into such rare\n> > situations (especially for the latter case). In most cases, parallel\n> > vacuum workers for index cleanup might exit with no-op but the\n> > side-effect (wasting resources and overhead etc) would not be big. If\n> > we want to enable it only in particular cases, we would need to have\n> > another way for index AM to tell lazy vacuum whether or not to allow a\n> > parallel worker to process the index at that time. What do you think?\n>\n> I am concerned about unintended consequences, like never noticing that\n> we should really recycle known deleted pages not yet placed in the FSM\n> (it's hard to think through very rare cases like this with\n> confidence). Is it really so bad if we launch parallel workers that we\n> don't really need for a parallel VACUUM?\n\nI don't think it's too bad even if we launch parallel workers for\nindexes that don’t really need to be processed by parallel workers.\nParallel workers exit immediately after all indexes are vacuumed so it\nwould not affect other parallel operations. 
There is no change in\nterms of DSM usage since btree\nindexes support parallel\nbulkdelete.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 25 Mar 2021 16:03:27 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 64-bit XIDs in deleted nbtree pages" } ]
[ { "msg_contents": "Over in [1] it was noted that the system behaves rather oddly if\nyou try to do ALTER USER/DATABASE SET with a custom GUC name\ncontaining \"=\" or \"-\". I think we should just disallow such cases.\nRelaxing the restriction is harder than it might seem:\n\n* The convention for entries in pg_db_role_setting is just\n\"name=value\" with no quoting rule, so GUC names containing \"=\"\ncan't work. We could imagine installing some kind of quoting rule,\nbut that would break client-side code that looks at this catalog;\npg_dump, for one, does so. On balance it seems clearly not worth\nchanging that.\n\n* The problem with using \"-\" is that we parse pg_db_role_setting\nentries with ParseLongOption(), which converts \"-\" to \"_\" because\nthat's what makes sense to do in the context of command-line switches\nsuch as \"-c work-mem=42MB\". We could imagine adjusting the code to\nnot do that in the pg_db_role_setting case, but you'd still be left\nwith a GUC that cannot be set via PGOPTIONS=\"-c custom.my-guc=42\".\nTo avoid that potential confusion, it seems best to ban \"-\" as well\nas \"=\".\n\nNow granting that the best answer is just to forbid these cases,\nthere are still a couple of decisions about how extensive the\nprohibition ought to be:\n\n* We could forbid these characters only when you try to actually\nput such a GUC into pg_db_role_setting, and otherwise allow them.\nThat seems like a weird nonorthogonal choice though, so I'd\nrather just forbid them period.\n\n* A case could be made for tightening things up a lot more, and not\nallowing anything that doesn't look like an identifier. I'm not\npushing for that, as it seems more likely to break existing\napplications than the narrow restriction proposed here. But I could\nlive with it if people prefer that way.\n\nAnyway, attached is a proposed patch that implements the restriction\nas stated. 
I'm inclined to propose this for HEAD only and not\nworry about the issue in the back branches.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/20210209144059.GA21360%40depesz.com", "msg_date": "Tue, 09 Feb 2021 17:34:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Tightening up allowed custom GUC names" }, { "msg_contents": "On Tue, Feb 09, 2021 at 05:34:37PM -0500, Tom Lane wrote:\n> Now granting that the best answer is just to forbid these cases,\n> there are still a couple of decisions about how extensive the\n> prohibition ought to be:\n> \n> * We could forbid these characters only when you try to actually\n> put such a GUC into pg_db_role_setting, and otherwise allow them.\n> That seems like a weird nonorthogonal choice though, so I'd\n> rather just forbid them period.\n\nAgreed.\n\n> * A case could be made for tightening things up a lot more, and not\n> allowing anything that doesn't look like an identifier. I'm not\n> pushing for that, as it seems more likely to break existing\n> applications than the narrow restriction proposed here. But I could\n> live with it if people prefer that way.\n\nI'd prefer that. Characters like backslash, space, and double quote have\nsignificant potential to reveal bugs, while having negligible application\nbeyond revealing bugs.\n\n\n", "msg_date": "Tue, 9 Feb 2021 15:01:55 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Tightening up allowed custom GUC names" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Tue, Feb 09, 2021 at 05:34:37PM -0500, Tom Lane wrote:\n>> * A case could be made for tightening things up a lot more, and not\n>> allowing anything that doesn't look like an identifier. I'm not\n>> pushing for that, as it seems more likely to break existing\n>> applications than the narrow restriction proposed here. 
But I could\n>> live with it if people prefer that way.\n\n> I'd prefer that. Characters like backslash, space, and double quote have\n> significant potential to reveal bugs, while having negligible application\n> beyond revealing bugs.\n\nAny other opinions here? I'm hesitant to make such a change on the\nbasis of just one vote.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Feb 2021 13:32:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Tightening up allowed custom GUC names" }, { "msg_contents": "чц, 11 лют 2021, 21:33 карыстальнік Tom Lane <tgl@sss.pgh.pa.us> напісаў:\n\n> Noah Misch <noah@leadboat.com> writes:\n> > On Tue, Feb 09, 2021 at 05:34:37PM -0500, Tom Lane wrote:\n> >> * A case could be made for tightening things up a lot more, and not\n> >> allowing anything that doesn't look like an identifier. I'm not\n> >> pushing for that, as it seems more likely to break existing\n> >> applications than the narrow restriction proposed here. But I could\n> >> live with it if people prefer that way.\n>\n> > I'd prefer that. Characters like backslash, space, and double quote have\n> > significant potential to reveal bugs, while having negligible application\n> > beyond revealing bugs.\n>\n> Any other opinions here? I'm hesitant to make such a change on the\n> basis of just one vote.\n>\n\n+1 for the change. I have not seen usage of = and - in the wild in GUC\nnames but can see a harm of mis-interpretation of these.\n\n\n\n\n> regards, tom lane\n>\n>\n>\n\nчц, 11 лют 2021, 21:33 карыстальнік Tom Lane <tgl@sss.pgh.pa.us> напісаў:Noah Misch <noah@leadboat.com> writes:\n> On Tue, Feb 09, 2021 at 05:34:37PM -0500, Tom Lane wrote:\n>> * A case could be made for tightening things up a lot more, and not\n>> allowing anything that doesn't look like an identifier.  I'm not\n>> pushing for that, as it seems more likely to break existing\n>> applications than the narrow restriction proposed here.  
But I could\n>> live with it if people prefer that way.\n\n> I'd prefer that.  Characters like backslash, space, and double quote have\n> significant potential to reveal bugs, while having negligible application\n> beyond revealing bugs.\n\nAny other opinions here?  I'm hesitant to make such a change on the\nbasis of just one vote.+1 for the change. I have not seen usage of = and - in the wild in GUC names but can see a harm of mis-interpretation of these. \n\n                        regards, tom lane", "msg_date": "Thu, 11 Feb 2021 21:59:46 +0300", "msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>", "msg_from_op": false, "msg_subject": "Re: Tightening up allowed custom GUC names" }, { "msg_contents": "On Tue, Feb 9, 2021 at 6:02 PM Noah Misch <noah@leadboat.com> wrote:\n> > * A case could be made for tightening things up a lot more, and not\n> > allowing anything that doesn't look like an identifier. I'm not\n> > pushing for that, as it seems more likely to break existing\n> > applications than the narrow restriction proposed here. But I could\n> > live with it if people prefer that way.\n>\n> I'd prefer that. Characters like backslash, space, and double quote have\n> significant potential to reveal bugs, while having negligible application\n> beyond revealing bugs.\n\nI'm not sure exactly what the rule should be here, but in general I\nagree that a broader prohibition might be better. 
It's hard to\nunderstand the rationale behind a system that doesn't allow\nrobert.max-workers as a GUC name, but does permit ro\nb\"ert.max^Hworkers.\n\n+1 for not back-patching whatever we do here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Feb 2021 14:50:13 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Tightening up allowed custom GUC names" }, { "msg_contents": "\nOn 2/11/21 1:32 PM, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n>> On Tue, Feb 09, 2021 at 05:34:37PM -0500, Tom Lane wrote:\n>>> * A case could be made for tightening things up a lot more, and not\n>>> allowing anything that doesn't look like an identifier. I'm not\n>>> pushing for that, as it seems more likely to break existing\n>>> applications than the narrow restriction proposed here. But I could\n>>> live with it if people prefer that way.\n>> I'd prefer that. Characters like backslash, space, and double quote have\n>> significant potential to reveal bugs, while having negligible application\n>> beyond revealing bugs.\n> Any other opinions here? I'm hesitant to make such a change on the\n> basis of just one vote.\n>\n> \t\t\t\n\n\n\nThat might be a bit restrictive. I could at least see allowing '-' as\nreasonable, and maybe ':'. 
Not sure about other punctuation characters.\nOTOH I'd be surprised if the identifier restriction would burden a large\nnumber of people.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 11 Feb 2021 15:04:17 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Tightening up allowed custom GUC names" }, { "msg_contents": "On Thu, Feb 11, 2021 at 02:50:13PM -0500, Robert Haas wrote:\n> +1 for not back-patching whatever we do here.\n\n+1.\n--\nMichael", "msg_date": "Sat, 13 Feb 2021 11:34:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Tightening up allowed custom GUC names" }, { "msg_contents": "[ getting back to this, after a bit of procrastination ]\n\nAndrew Dunstan <andrew@dunslane.net> writes:\n> On 2/11/21 1:32 PM, Tom Lane wrote:\n>> Noah Misch <noah@leadboat.com> writes:\n>>> On Tue, Feb 09, 2021 at 05:34:37PM -0500, Tom Lane wrote:\n>>>> * A case could be made for tightening things up a lot more, and not\n>>>> allowing anything that doesn't look like an identifier. I'm not\n>>>> pushing for that, as it seems more likely to break existing\n>>>> applications than the narrow restriction proposed here. But I could\n>>>> live with it if people prefer that way.\n\n>>> I'd prefer that. Characters like backslash, space, and double quote have\n>>> significant potential to reveal bugs, while having negligible application\n>>> beyond revealing bugs.\n\n> That might be a bit restrictive. I could at least see allowing '-' as\n> reasonable, and maybe ':'. Not sure about other punctuation characters.\n> OTOH I'd be surprised if the identifier restriction would burden a large\n> number of people.\n\nWe can't allow '-', for the specific reason that it won't work as a -c\nargument (thanks to -c's translation of '-' to '_'). The whole point here\nis to prevent corner cases like that. 
':' would be all right, but I think\nit's a lot simpler to explain and a lot harder to break in future if we\njust say that the names have to be valid identifiers.\n\nPatch that does it like that attached.\n\n(I concur with the downthread opinions that we shouldn't back-patch this.)\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 15 Mar 2021 14:49:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Tightening up allowed custom GUC names" }, { "msg_contents": "I wrote:\n> We can't allow '-', for the specific reason that it won't work as a -c\n> argument (thanks to -c's translation of '-' to '_'). The whole point here\n> is to prevent corner cases like that. ':' would be all right, but I think\n> it's a lot simpler to explain and a lot harder to break in future if we\n> just say that the names have to be valid identifiers.\n\nHearing no further comments, I pushed the more restrictive version.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 07 Apr 2021 11:23:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Tightening up allowed custom GUC names" } ]
[ { "msg_contents": "Hi Hackers,\n\nPer Coverity.\n\nCoverity complaints about pg_cryptohash_final function.\nAnd I agree with Coverity, it's a bad design.\nIts allows this:\n\n#define MY_RESULT_LENGTH 32\n\nfunction pgtest(char * buffer, char * text) {\npg_cryptohash_ctx *ctx;\nuint8 digest[MY_RESULT_LENGTH];\n\nctx = pg_cryptohash_create(PG_SHA512);\npg_cryptohash_init(ctx);\npg_cryptohash_update(ctx, (uint8 *) buffer, text);\npg_cryptohash_final(ctx, digest); // <-- CID 1446240 (#1 of 1):\nOut-of-bounds access (OVERRUN)\npg_cryptohash_free(ctx);\nreturn\n}\n\nAttached has a patch with suggestions to make things better.\n\nregards,\nRanier Vilela", "msg_date": "Tue, 9 Feb 2021 22:01:45 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "pg_cryptohash_final possible out-of-bounds access (per Coverity)" }, { "msg_contents": "At Tue, 9 Feb 2021 22:01:45 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> Hi Hackers,\n> \n> Per Coverity.\n> \n> Coverity complaints about pg_cryptohash_final function.\n> And I agree with Coverity, it's a bad design.\n> Its allows this:\n> \n> #define MY_RESULT_LENGTH 32\n> \n> function pgtest(char * buffer, char * text) {\n> pg_cryptohash_ctx *ctx;\n> uint8 digest[MY_RESULT_LENGTH];\n> \n> ctx = pg_cryptohash_create(PG_SHA512);\n> pg_cryptohash_init(ctx);\n> pg_cryptohash_update(ctx, (uint8 *) buffer, text);\n> pg_cryptohash_final(ctx, digest); // <-- CID 1446240 (#1 of 1):\n> Out-of-bounds access (OVERRUN)\n> pg_cryptohash_free(ctx);\n> return\n> }\n>\n> Attached has a patch with suggestions to make things better.\n\nI'm not sure about the details, but it looks like broken.\n\nmake complains for inconsistent prototypes abd cryptohahs.c and sha1.c\ndoesn't seem to agree on its interface.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 10 Feb 2021 12:13:44 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", 
"msg_from_op": false, "msg_subject": "Re: pg_cryptohash_final possible out-of-bounds access (per\n Coverity)" }, { "msg_contents": "At Wed, 10 Feb 2021 12:13:44 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Tue, 9 Feb 2021 22:01:45 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> > Hi Hackers,\n> > \n> > Per Coverity.\n> > \n> > Coverity complaints about pg_cryptohash_final function.\n> > And I agree with Coverity, it's a bad design.\n> > Its allows this:\n> > \n> > #define MY_RESULT_LENGTH 32\n> > \n> > function pgtest(char * buffer, char * text) {\n> > pg_cryptohash_ctx *ctx;\n> > uint8 digest[MY_RESULT_LENGTH];\n> > \n> > ctx = pg_cryptohash_create(PG_SHA512);\n> > pg_cryptohash_init(ctx);\n> > pg_cryptohash_update(ctx, (uint8 *) buffer, text);\n> > pg_cryptohash_final(ctx, digest); // <-- CID 1446240 (#1 of 1):\n> > Out-of-bounds access (OVERRUN)\n> > pg_cryptohash_free(ctx);\n> > return\n> > }\n> >\n> > Attached has a patch with suggestions to make things better.\n> \n> I'm not sure about the details, but it looks like broken.\n> \n> make complains for inconsistent prototypes abd cryptohahs.c and sha1.c\n> doesn't seem to agree on its interface.\n\nSorry, my messages was broken.\n\nmake complains for inconsistent prototypes, and cryptohahs.c and\nsha1.c don't seem to agree on the interface of pg_sha1_final.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 10 Feb 2021 12:16:59 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_cryptohash_final possible out-of-bounds access (per\n Coverity)" }, { "msg_contents": "At Tue, 9 Feb 2021 22:01:45 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> Hi Hackers,\n> \n> Per Coverity.\n> \n> Coverity complaints about pg_cryptohash_final function.\n> And I agree with Coverity, it's a bad design.\n> Its allows this:\n> \n> #define MY_RESULT_LENGTH 32\n> \n> function 
pgtest(char * buffer, char * text) {\n> pg_cryptohash_ctx *ctx;\n> uint8 digest[MY_RESULT_LENGTH];\n> \n> ctx = pg_cryptohash_create(PG_SHA512);\n> pg_cryptohash_init(ctx);\n> pg_cryptohash_update(ctx, (uint8 *) buffer, text);\n> pg_cryptohash_final(ctx, digest); // <-- CID 1446240 (#1 of 1):\n> Out-of-bounds access (OVERRUN)\n> pg_cryptohash_free(ctx);\n> return\n> }\n\nIt seems to me that the above just means the caller must provide a\ndigest buffer that fits the use. In the above example digest just must\nbe 64 byte. If Coverity complains so, what should do for the\ncomplaint is to fix the caller to provide a digest buffer of the\ncorrect size.\n\nCould you show the detailed context where Coverity complained?\n\n> Attached has a patch with suggestions to make things better.\n\nSo it doesn't seem to me the right direction. Even if we are going to\nmake pg_cryptohash_final to take the buffer length, it should\nerror-out or assert-out if the length is too small rather than copy a\npart of the digest bytes. (In short, it would only be assertion-use.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 10 Feb 2021 13:44:12 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_cryptohash_final possible out-of-bounds access (per\n Coverity)" }, { "msg_contents": "On Wed, Feb 10, 2021 at 01:44:12PM +0900, Kyotaro Horiguchi wrote:\n> It seems to me that the above just means the caller must provide a\n> digest buffer that fits the use. In the above example digest just must\n> be 64 byte. If Coverity complains so, what should do for the\n> complaint is to fix the caller to provide a digest buffer of the\n> correct size.\n> \n> Could you show the detailed context where Coverity complained?\n\nFWIW, the community Coverity instance is not complaining here, so\nI have no idea what kind of configuration it uses to generate this\nreport. 
Saying that, this is just the same idea as cfc40d3 for\nbase64.c and aef8948 for hex.c where we provide the length of the \nresult buffer to be able to control any overflow. So that's a safety\nbelt to avoid a caller to do stupid things where he/she would\noverwrite some memory with a buffer allocation with a size lower than\nthe size of the digest expected in the result generated.\n\n> So it doesn't seem to me the right direction. Even if we are going to\n> make pg_cryptohash_final to take the buffer length, it should\n> error-out or assert-out if the length is too small rather than copy a\n> part of the digest bytes. (In short, it would only be assertion-use.)\n\nYes, we could be more defensive here, and considering libpq I think\nthat this had better be an error rather than an assertion to remain on\nthe safe side. The patch proposed is incomplete on several points:\n- cryptohash_openssl.c is not touched, so this patch will fail to\ncompile with --with-ssl=openssl (or --with-openssl if you want).\n- There is nothing actually checked in the final function. As we\nalready know the size of the result digest, we just need to make sure\nthat the size of the output is at least the size of the digest, so we\ncan just add a check based on MD5_DIGEST_LENGTH and such. There is no\nneed to touch the internal functions of MD5/SHA1/SHA2 for the\nnon-OpenSSL case. 
For the OpenSSL case, and looking at digest.c in\nthe upstream code, we would need a similar check, as\nEVP_DigestFinal_ex() would happily overwrite the area if the caller is\nnot careful (note that the third argument of the function reports the\nnumber of bytes written, *after* the fact).\n\nI don't see much the point to complicate scram_HMAC_final() and\nscram_H() here, as well as the manipulations done for SCRAM_KEY_LEN in\nscram-common.h.\n--\nMichael", "msg_date": "Wed, 10 Feb 2021 16:17:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_cryptohash_final possible out-of-bounds access (per Coverity)" }, { "msg_contents": "Em qua., 10 de fev. de 2021 às 01:44, Kyotaro Horiguchi <\nhorikyota.ntt@gmail.com> escreveu:\n\n> At Tue, 9 Feb 2021 22:01:45 -0300, Ranier Vilela <ranier.vf@gmail.com>\n> wrote in\n> > Hi Hackers,\n> >\n> > Per Coverity.\n> >\n> > Coverity complaints about pg_cryptohash_final function.\n> > And I agree with Coverity, it's a bad design.\n> > Its allows this:\n> >\n> > #define MY_RESULT_LENGTH 32\n> >\n> > function pgtest(char * buffer, char * text) {\n> > pg_cryptohash_ctx *ctx;\n> > uint8 digest[MY_RESULT_LENGTH];\n> >\n> > ctx = pg_cryptohash_create(PG_SHA512);\n> > pg_cryptohash_init(ctx);\n> > pg_cryptohash_update(ctx, (uint8 *) buffer, text);\n> > pg_cryptohash_final(ctx, digest); // <-- CID 1446240 (#1 of 1):\n> > Out-of-bounds access (OVERRUN)\n> > pg_cryptohash_free(ctx);\n> > return\n> > }\n>\n> It seems to me that the above just means the caller must provide a\n> digest buffer that fits the use. In the above example digest just must\n> be 64 byte. 
If Coverity complains so, what should do for the\n> complaint is to fix the caller to provide a digest buffer of the\n> correct size.\n>\nExactly.\n\n\n> Could you show the detailed context where Coverity complained?\n>\nCoverity complains about call memcpy with fixed size, in a context with\nbuffer variable size supplied by the caller.\n\n\n>\n> > Attached has a patch with suggestions to make things better.\n>\n> So it doesn't seem to me the right direction. Even if we are going to\n> make pg_cryptohash_final to take the buffer length, it should\n> error-out or assert-out if the length is too small rather than copy a\n> part of the digest bytes. (In short, it would only be assertion-use.)\n>\nIt is necessary to correct the interfaces. To caller, inform the size of\nthe buffer it created.\nI think it should be error-out, because the buffer can be malloc.\n\nregards,\nRanier Vilela", "msg_date": "Wed, 10 Feb 2021 09:14:46 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_cryptohash_final possible out-of-bounds access (per Coverity)" }, { "msg_contents": "On Wed, Feb 10, 2021 at 09:14:46AM -0300, Ranier Vilela wrote:\n> It is necessary to correct the interfaces. To caller, inform the size of\n> the buffer it created.\n\nWell, Coverity likes nannyism, so each one of its reports is to take\nwith a pinch of salt, so there is no point to change something that\ndoes not make sense just to please a static analyzer. The context\nof the code matters.\n\nNow, the patch you sent has no need to be that complicated, and it\npartially works while not actually solving at all the problem you are\ntrying to solve (nothing done for MD5 or OpenSSL). Attached is an\nexample of what I finish with while poking at this issue. 
There is IMO\nno point to touch the internals of SCRAM that all rely on the same\ndigest lengths for the proof generation with SHA256.\n\n> I think it should be error-out, because the buffer can be malloc.\n\nI don't understand what you mean here, but cryptohash[_openssl].c\nshould not issue an error directly, just return a status code that the\ncaller can consume to generate an error.\n--\nMichael", "msg_date": "Thu, 11 Feb 2021 21:47:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_cryptohash_final possible out-of-bounds access (per Coverity)" }, { "msg_contents": "Em qui., 11 de fev. de 2021 às 09:47, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Wed, Feb 10, 2021 at 09:14:46AM -0300, Ranier Vilela wrote:\n> > It is necessary to correct the interfaces. To caller, inform the size of\n> > the buffer it created.\n>\n> Well, Coverity likes nannyism, so each one of its reports is to take\n> with a pinch of salt, so there is no point to change something that\n> does not make sense just to please a static analyzer. The context\n> of the code matters.\n>\nI do not agree. Coverity is a valuable tool that points to bad design\nfunctions.\nAs demonstrated in the first email, it allows the user of the functions to\ncorrupt memory.\nSo it makes perfect sense, fixing the interface to prevent and prevent\nfuture modifications, simply breaking cryptohash api.\n\n\n>\n> Now, the patch you sent has no need to be that complicated, and it\n> partially works while not actually solving at all the problem you are\n> trying to solve (nothing done for MD5 or OpenSSL). Attached is an\n> example of what I finish with while poking at this issue. There is IMO\n> no point to touch the internals of SCRAM that all rely on the same\n> digest lengths for the proof generation with SHA256.\n>\nToo fast. 
I spent 30 minutes doing the patch.\n\n\n>\n>\n> > I think it should be error-out, because the buffer can be malloc.\n>\n> I don't understand what you mean here, but cryptohash[_openssl].c\n> should not issue an error directly, just return a status code that the\n> caller can consume to generate an error.\n>\nI meant that it is not a case of assertion, as suggested by Kyotaro,\nbecause someone might want to create a dynamic buffer per malloc, to store\nthe digest.\nAnyway, the buffer creator needs to tell the functions what the actual\nbuffer size is, so they can decide what to do.\n\nregards,\nRanier Vilela", "msg_date": "Thu, 11 Feb 2021 10:20:38 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_cryptohash_final possible out-of-bounds access (per Coverity)" }, { "msg_contents": "Em qui., 11 de fev. de 2021 às 09:47, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Wed, Feb 10, 2021 at 09:14:46AM -0300, Ranier Vilela wrote:\n> > It is necessary to correct the interfaces. To caller, inform the size of\n> > the buffer it created.\n>\n> Now, the patch you sent has no need to be that complicated, and it\n> partially works while not actually solving at all the problem you are\n> trying to solve (nothing done for MD5 or OpenSSL). Attached is an\n> example of what I finish with while poking at this issue. There is IMO\n> no point to touch the internals of SCRAM that all rely on the same\n> digest lengths for the proof generation with SHA256.\n>\nOk, I take a look at your patch and I have comments:\n\n1. Looks missed contrib/pgcrypto.\n2. 
scram_HMAC_final function still have a exchanged parameters,\n which in the future may impair maintenance.\n\nAttached the v3 same patch.\n\nregards,\nRanier Vilela", "msg_date": "Thu, 11 Feb 2021 19:55:45 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_cryptohash_final possible out-of-bounds access (per Coverity)" }, { "msg_contents": "At Thu, 11 Feb 2021 19:55:45 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> Em qui., 11 de fev. de 2021 às 09:47, Michael Paquier <michael@paquier.xyz>\n> escreveu:\n> \n> > On Wed, Feb 10, 2021 at 09:14:46AM -0300, Ranier Vilela wrote:\n> > > It is necessary to correct the interfaces. To caller, inform the size of\n> > > the buffer it created.\n> >\n> > Now, the patch you sent has no need to be that complicated, and it\n> > partially works while not actually solving at all the problem you are\n> > trying to solve (nothing done for MD5 or OpenSSL). Attached is an\n> > example of what I finish with while poking at this issue. There is IMO\n> > no point to touch the internals of SCRAM that all rely on the same\n> > digest lengths for the proof generation with SHA256.\n> >\n> Ok, I take a look at your patch and I have comments:\n> \n> 1. Looks missed contrib/pgcrypto.\n> 2. scram_HMAC_final function still have a exchanged parameters,\n> which in the future may impair maintenance.\n\nThe v3 drops the changes of the uuid_ossp contrib. I'm not sure the\nchange of scram_HMAC_final is needed.\n\nIn v2, int_md5_finish() calls pg_cryptohash_final() with\nh->block_size(h) (64) but it should be h->result_size(h)\n(16). int_sha1_finish() is wrong the same way. (and, v3 seems fixing\nthem in the wrong way.)\n\nAlthough I don't oppose to make things defensive, I think the derived\ninterfaces should be defensive in the same extent if we do. 
Especially\nthe calls to the function in checksum_helper.c is just nullifying the\nprotection.\n\nFor now, we can actually protect from too-short buffers in the\nfollowing places. pg_cryptohash_final receives the buffer length\nirrelevant to the actual length in other places.\n\n0/3 places in pgcrypto.\n2/2 places in uuid-ossp.\n1/1 place in auth-scram.c\n1/1 place in backup_manifest.c\n1/1 place in cryptohashfuncs.c\n1/1 place in parse_manifest.c\n0/4 places in checksum_helper.c\n1/2 place in md5_common.c\n2/4 places in scram-common.c (The two places are claimed not to need the protection.)\n\nTotal 9/19 places. I think at least pg_checksum_final() should take\nthe buffer length. I'm not sure about px_md_finish() and\nhmac_md_finish()..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 12 Feb 2021 15:21:40 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_cryptohash_final possible out-of-bounds access (per\n Coverity)" }, { "msg_contents": "On Fri, Feb 12, 2021 at 03:21:40PM +0900, Kyotaro Horiguchi wrote:\n> The v3 drops the changes of the uuid_ossp contrib. I'm not sure the\n> change of scram_HMAC_final is needed.\n\nMeaning that v3 would fail to compile uuid-ossp. v3 also produces\ncompilation warnings in auth-scram.c.\n\n> In v2, int_md5_finish() calls pg_cryptohash_final() with\n> h->block_size(h) (64) but it should be h->result_size(h)\n> (16). int_sha1_finish() is wrong the same way. (and, v3 seems fixing\n> them in the wrong way.)\n\nRight. These should just use h->result_size(h), and not\nh->block_size(h).\n\n-extern int scram_HMAC_final(uint8 *result, scram_HMAC_ctx *ctx);\n+extern int scram_HMAC_final(scram_HMAC_ctx *ctx, uint8 *result);\nThere is no point in this change. 
You just make back-patching harder\nwhile doing nothing about the problem at hand.\n\n- if (pg_cryptohash_final(manifest->manifest_ctx, checksumbuf) < 0)\n+ if (pg_cryptohash_final(manifest->manifest_ctx, checksumbuf,\n+ PG_SHA256_DIGEST_LENGTH) < 0)\nHere this could just use sizeof(checksumbuf)? This pattern could be\nused elsewhere as well, like in md5_common.c.\n\n> Although I don't oppose to make things defensive, I think the derived\n> interfaces should be defensive in the same extent if we do. Especially\n> the calls to the function in checksum_helper.c is just nullifying the\n> protection.\n\nThe checksum stuff just relies on PG_CHECKSUM_MAX_LENGTH and there are\nalready static assertions used as sanity checks, so I see little point\nin adding a new argument that would be just PG_CHECKSUM_MAX_LENGTH.\nThis backup checksum code is already very specific, and it is not\nintended for uses as generic as the cryptohash functions. With such a\nchange, my guess is that it becomes really easy to miss that\npg_checksum_final() has to return the size of the digest result, and\nnot the maximum buffer size allocation. Perhaps one thing this part\ncould do is just to save the digest length in a variable and use it\nfor retval and the third argument of pg_cryptohash_final(), but the\nimpact looks limited.\n\n> Total 9/19 places. I think at least pg_checksum_final() should take\n> the buffer length. I'm not sure about px_md_finish() and\n> hmac_md_finish()..\n\nI guess that you mean px_hmac_finish() for the second one. The first\none is tied to passing down result_size() and down to the cryptohash\nfunctoins, meaning that there is no need to take about it more than\nthat IMO. The second one would be tied to the HMAC refactoring. 
This\nwould be valuable in the case of pgcrypto when building with OpenSSL,\nmeaning that the code would go through the defenses put in place at\nthe PG level.\n--\nMichael", "msg_date": "Sat, 13 Feb 2021 10:47:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_cryptohash_final possible out-of-bounds access (per Coverity)" }, { "msg_contents": "Em sex., 12 de fev. de 2021 às 22:47, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Fri, Feb 12, 2021 at 03:21:40PM +0900, Kyotaro Horiguchi wrote:\n> > The v3 drops the changes of the uuid_ossp contrib. I'm not sure the\n> > change of scram_HMAC_final is needed.\n>\n> Meaning that v3 would fail to compile uuid-ossp. v3 also produces\n> compilation warnings in auth-scram.c.\n>\n> > In v2, int_md5_finish() calls pg_cryptohash_final() with\n> > h->block_size(h) (64) but it should be h->result_size(h)\n> > (16). int_sha1_finish() is wrong the same way. (and, v3 seems fixing\n> > them in the wrong way.)\n>\n> Right. These should just use h->result_size(h), and not\n> h->block_size(h).\n>\n> -extern int scram_HMAC_final(uint8 *result, scram_HMAC_ctx *ctx);\n> +extern int scram_HMAC_final(scram_HMAC_ctx *ctx, uint8 *result);\n> There is no point in this change. You just make back-patching harder\n> while doing nothing about the problem at hand.\n>\nIMO there is no necessity in back-patching.\n\n\n> - if (pg_cryptohash_final(manifest->manifest_ctx, checksumbuf) < 0)\n> + if (pg_cryptohash_final(manifest->manifest_ctx, checksumbuf,\n> + PG_SHA256_DIGEST_LENGTH) < 0)\n> Here this could just use sizeof(checksumbuf)? 
This pattern could be\n> used elsewhere as well, like in md5_common.c.\n>\nDone.\n\nAttached a v4 of patch.\n\nregards,\nRanier Vilela", "msg_date": "Sat, 13 Feb 2021 17:37:32 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_cryptohash_final possible out-of-bounds access (per Coverity)" }, { "msg_contents": "On Sat, Feb 13, 2021 at 05:37:32PM -0300, Ranier Vilela wrote:\n> IMO there is no necessity in back-patching.\n\nYou are missing the point here. What you are proposing here would not\nbe backpatched. However, reusing the same words as upthread, this has\na cost in terms of *future* maintenance. In short, any *future*\npotential bug fix that would require to be backpatched in need of\nusing this function or touching its area would result in a conflict.\nThis changes makes no sense.\n--\nMichael", "msg_date": "Sun, 14 Feb 2021 08:32:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_cryptohash_final possible out-of-bounds access (per Coverity)" }, { "msg_contents": "Em sáb., 13 de fev. de 2021 às 20:32, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Sat, Feb 13, 2021 at 05:37:32PM -0300, Ranier Vilela wrote:\n> > IMO there is no necessity in back-patching.\n>\n> You are missing the point here. What you are proposing here would not\n> be backpatched. However, reusing the same words as upthread, this has\n> a cost in terms of *future* maintenance. In short, any *future*\n> potential bug fix that would require to be backpatched in need of\n> using this function or touching its area would result in a conflict.\n>\nOk. 
+1 for back-patching.\n\nAny future maintenance, or use of that functions, need to consult the api.\n\nscram_HMAC_init(scram_HMAC_ctx *ctx, const uint8 *key, int keylen);\nscram_HMAC_update(scram_HMAC_ctx *ctx, const char *str, int slen);\nscram_HMAC_final(uint8 *result, scram_HMAC_ctx *ctx);\n\nSee both \"result\" and \"ctx\" are pointers.\nSomeone can use like this:\n\nscram_HMAC_init(&ctx, key, keylen);\nscram_HMAC_update(&ctx, str, slen);\nscram_HMAC_final(&ctx, result); // parameters wrong order\n\nAnd many compilers won't complain.\n\nregards,\nRanier Vilela", "msg_date": "Sat, 13 Feb 2021 21:33:48 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_cryptohash_final possible out-of-bounds access (per Coverity)" }, { "msg_contents": "On Sat, Feb 13, 2021 at 09:33:48PM -0300, Ranier Vilela wrote:\n> Em sáb., 13 de fev. 
de 2021 às 20:32, Michael Paquier <michael@paquier.xyz>\n> escreveu:\n> \n>> You are missing the point here. What you are proposing here would not\n>> be backpatched. However, reusing the same words as upthread, this has\n>> a cost in terms of *future* maintenance. In short, any *future*\n>> potential bug fix that would require to be backpatched in need of\n>> using this function or touching its area would result in a conflict.\n>\n> Ok. +1 for back-patching.\n\nPlease take the time to read again my previous email.\n\nAnd also, please take the time to actually test patches you send,\nbecause the list of things getting broken is impressive. At least\nyou make sure that the internals of cryptohash.c generate an error as\nthey should because of those incorrect sizes :)\n\ngit diff --check complains, in various places.\n\n@@ -330,7 +330,8 @@ SendBackupManifest(backup_manifest_info *manifest)\n \t * twice.\n \t */\n \tmanifest->still_checksumming = false;\n-\tif (pg_cryptohash_final(manifest->manifest_ctx, checksumbuf) < 0)\n+\tif (pg_cryptohash_final(manifest->manifest_ctx, checksumbuf,\n+\t\t\t\t\t\t\tsizeof(checksumbuf) - 1) < 0)\n \t\telog(ERROR, \"failed to finalize checksum of backup manifest\");\nThis breaks backup manifests, due to an incorrect calculation.\n\n@@ -78,7 +78,8 @@ pg_md5_hash(const void *buff, size_t len, char *hexsum)\n \tif (pg_cryptohash_init(ctx) < 0 ||\n \t\tpg_cryptohash_update(ctx, buff, len) < 0 ||\n-\t\tpg_cryptohash_final(ctx, sum) < 0)\n+\t\tpg_cryptohash_final(ctx, sum, \n+\t\t sizeof(sum) - 1) < 0)\nThis one breaks MD5 hashing, due to an incorrect size calculation,\nagain.\n\n@@ -51,7 +51,8 @@ scram_HMAC_init(scram_HMAC_ctx *ctx, const uint8 *key, int keylen)\n \t\t\treturn -1;\n \t\tif (pg_cryptohash_init(sha256_ctx) < 0 ||\n \t\t\tpg_cryptohash_update(sha256_ctx, key, keylen) < 0 ||\n-\t\t\tpg_cryptohash_final(sha256_ctx, keybuf) < 0)\n+\t\t\tpg_cryptohash_final(sha256_ctx, keybuf, \n+\t\t\t sizeof(keybuf) - 1) < 0)\n[...]\n-\tif 
(pg_cryptohash_final(ctx->sha256ctx, h) < 0)\n+\tif (pg_cryptohash_final(ctx->sha256ctx, h, \n+\t sizeof(h) - 1) < 0)\nThis breaks SCRAM authentication, for the same reason. In three\nplaces.\n\nI think that in pg_checksum_final() we had better save the digest\nlength in \"retval\" before calling pg_cryptohash_final(), and use it\nfor the size passed down.\n\ncontrib/uuid-ossp/ fails to compile.\n\ncontrib/pgcrypto/ fix is incorrect, requiring h->result_size(h) in\nthree places.\n\nI think that as a whole we should try to minimize the number of times\nwe use any DIGEST_LENGTH variable, relying a maximum on sizeof().\nThis v4 is a mixed bad of that. Once you switch to that, there is an\ninteresting result with uuid-ossp, where you can notice that there is\na match between the size of dce_uuid_t and MD5_DIGEST_LENGTH.\n\nNo need to send a new patch, the attached taking care of those\nissues, and it is correctly indented. I'll just look at that again\ntomorrow, it is already late here.\n--\nMichael", "msg_date": "Sun, 14 Feb 2021 20:22:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_cryptohash_final possible out-of-bounds access (per Coverity)" }, { "msg_contents": "Em dom., 14 de fev. de 2021 às 08:22, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Sat, Feb 13, 2021 at 09:33:48PM -0300, Ranier Vilela wrote:\n> > Em sáb., 13 de fev. de 2021 às 20:32, Michael Paquier <\n> michael@paquier.xyz>\n> > escreveu:\n> >\n> >> You are missing the point here. What you are proposing here would not\n> >> be backpatched. However, reusing the same words as upthread, this has\n> >> a cost in terms of *future* maintenance. In short, any *future*\n> >> potential bug fix that would require to be backpatched in need of\n> >> using this function or touching its area would result in a conflict.\n> >\n> > Ok. 
+1 for back-patching.\n>\n> Please take the time to read again my previous email.\n>\n> And also, please take the time to actually test patches you send,\n> because the list of things getting broken is impressive. At least\n> you make sure that the internals of cryptohash.c generate an error as\n> they should because of those incorrect sizes :)\n>\n> git diff --check complains, in various places.\n>\n> @@ -330,7 +330,8 @@ SendBackupManifest(backup_manifest_info *manifest)\n> * twice.\n> */\n> manifest->still_checksumming = false;\n> - if (pg_cryptohash_final(manifest->manifest_ctx, checksumbuf) < 0)\n> + if (pg_cryptohash_final(manifest->manifest_ctx, checksumbuf,\n> +\n> sizeof(checksumbuf) - 1) < 0)\n> elog(ERROR, \"failed to finalize checksum of backup\n> manifest\");\n> This breaks backup manifests, due to an incorrect calculation.\n>\nBad habits.\nsizeof - 1, I use with strings.\n\n\n> I think that in pg_checksum_final() we had better save the digest\n> length in \"retval\" before calling pg_cryptohash_final(), and use it\n> for the size passed down.\n>\npg_checksum_final I would like to see it like this:\n\n case CHECKSUM_TYPE_SHA224:\n retval = PG_SHA224_DIGEST_LENGTH;\n break;\n case CHECKSUM_TYPE_SHA256:\n retval = PG_SHA256_DIGEST_LENGTH;\n break;\n case CHECKSUM_TYPE_SHA384:\n retval = PG_SHA384_DIGEST_LENGTH;\n break;\n case CHECKSUM_TYPE_SHA512:\n retval = PG_SHA512_DIGEST_LENGTH;\n break;\n default:\n return -1;\n }\n\n if (pg_cryptohash_final(context->raw_context.c_sha2,\n output, retval) < 0)\n return -1;\npg_cryptohash_free(context->raw_context.c_sha2);\n\nWhat do you think?\n\nregards,\nRanier Vilela", "msg_date": "Sun, 14 Feb 2021 11:39:47 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_cryptohash_final possible out-of-bounds access (per Coverity)" }, { "msg_contents": "On Sun, Feb 14, 2021 at 11:39:47AM -0300, Ranier Vilela wrote:\n> What do you think?\n\nThat's not a good idea for two reasons:\n1) There is CRC32 to worry about, which relies on a different logic.\n2) It would become easier to miss the new option as compilation would\nnot warn anymore if a new checksum type is added.\n\nI have reviewed my patch this morning, tweaked a comment, and applied\nit.\n--\nMichael", "msg_date": "Mon, 15 Feb 2021 10:28:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_cryptohash_final possible out-of-bounds access (per Coverity)" }, { "msg_contents": "Em dom., 14 de fev. 
de 2021 às 22:28, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Sun, Feb 14, 2021 at 11:39:47AM -0300, Ranier Vilela wrote:\n> > What do you think?\n>\n> That's not a good idea for two reasons:\n> 1) There is CRC32 to worry about, which relies on a different logic.\n> 2) It would become easier to miss the new option as compilation would\n> not warn anymore if a new checksum type is added.\n>\n> I have reviewed my patch this morning, tweaked a comment, and applied\n> it.\n>\nThanks for the commit.\n\nregards,\nRanier Vilela", "msg_date": "Mon, 15 Feb 2021 10:58:42 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_cryptohash_final possible out-of-bounds access (per Coverity)" } ]
[ { "msg_contents": "Hi:\n\nThis patch is the first patch in UniqueKey patch series[1], since I need to\nrevise\nthat series many times but the first one would be not that often, so I'd\nlike to\nsubmit this one for review first so that I don't need to maintain it again\nand again.\n\nv1-0001-Introduce-notnullattrs-field-in-RelOptInfo-to-ind.patch\n\nIntroduce notnullattrs field in RelOptInfo to indicate which attr are not\nnull\nin current query. The not null is judged by checking pg_attribute and\nquery's\nrestrictinfo. The info is only maintained at Base RelOptInfo and Partition's\nRelOptInfo level so far.\n\n\nAny thoughts?\n\n[1]\nhttps://www.postgresql.org/message-id/CAKU4AWr1BmbQB4F7j22G%2BNS4dNuem6dKaUf%2B1BK8me61uBgqqg%40mail.gmail.com\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Wed, 10 Feb 2021 11:18:47 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch series)" }, { "msg_contents": "On Wed, Feb 10, 2021 at 11:18 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> Hi:\n>\n> This patch is the first patch in UniqueKey patch series[1], since I need\n> to revise\n> that series many times but the first one would be not that often, so I'd\n> like to\n> submit this one for review first so that I don't need to maintain it again\n> and again.\n>\n> v1-0001-Introduce-notnullattrs-field-in-RelOptInfo-to-ind.patch\n>\n> Introduce notnullattrs field in RelOptInfo to indicate which attr are not\n> null\n> in current query. The not null is judged by checking pg_attribute and\n> query's\n> restrictinfo. The info is only maintained at Base RelOptInfo and\n> Partition's\n> RelOptInfo level so far.\n>\n>\n> Any thoughts?\n>\n> [1]\n> https://www.postgresql.org/message-id/CAKU4AWr1BmbQB4F7j22G%2BNS4dNuem6dKaUf%2B1BK8me61uBgqqg%40mail.gmail.com\n>\n> --\n> Best Regards\n> Andy Fan (https://www.aliyun.com/)\n>\n\nAdd the missed patch..\n\n--\nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Wed, 10 Feb 2021 11:27:25 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "Can this information be part of PathTarget structure and hence part of\nRelOptInfo::reltarget, so that it can be extended to join, group and\nother kinds of RelOptInfo in future? 
In fact, it might be easy to do\nthat in this patch itself.\n\nOn Wed, Feb 10, 2021 at 8:57 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>\n> On Wed, Feb 10, 2021 at 11:18 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>>\n>> Hi:\n>>\n>> This patch is the first patch in UniqueKey patch series[1], since I need to revise\n>> that series many times but the first one would be not that often, so I'd like to\n>> submit this one for review first so that I don't need to maintain it again and again.\n>>\n>> v1-0001-Introduce-notnullattrs-field-in-RelOptInfo-to-ind.patch\n>>\n>> Introduce notnullattrs field in RelOptInfo to indicate which attr are not null\n>> in current query. The not null is judged by checking pg_attribute and query's\n>> restrictinfo. The info is only maintained at Base RelOptInfo and Partition's\n>> RelOptInfo level so far.\n>>\n>>\n>> Any thoughts?\n>>\n>> [1] https://www.postgresql.org/message-id/CAKU4AWr1BmbQB4F7j22G%2BNS4dNuem6dKaUf%2B1BK8me61uBgqqg%40mail.gmail.com\n>> --\n>> Best Regards\n>> Andy Fan (https://www.aliyun.com/)\n>\n>\n> Add the missed patch..\n>\n> --\n> Best Regards\n> Andy Fan (https://www.aliyun.com/)\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 11 Feb 2021 18:39:45 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> writes:\n> Can this information be part of PathTarget structure and hence part of\n> RelOptInfo::reltarget, so that it can be extended to join, group and\n> other kinds of RelOptInfo in future?\n\nWhy would that be better than keeping it in RelOptInfo?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Feb 2021 09:52:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { 
"msg_contents": "On Wed, 10 Feb 2021 at 16:18, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> v1-0001-Introduce-notnullattrs-field-in-RelOptInfo-to-ind.patch\n>\n> Introduce notnullattrs field in RelOptInfo to indicate which attr are not null\n> in current query. The not null is judged by checking pg_attribute and query's\n> restrictinfo. The info is only maintained at Base RelOptInfo and Partition's\n> RelOptInfo level so far.\n>\n>\n> Any thoughts?\n\nI'm not that happy with what exactly the definition is of\nRelOptInfo.notnullattrs.\n\nThe comment for the field says:\n+ /* Not null attrs, start from -FirstLowInvalidHeapAttributeNumber */\n\nSo you could expect someone to assume that these are a Bitmapset of\nattnums for all columns in the relation marked as NOT NULL. However,\nthat's not true since you use find_nonnullable_vars() to chase down\nquals that filter out NULL values and you mark those too.\n\nThe reason I don't really like this is that it really depends where\nyou want to use RelOptInfo.notnullattrs. If someone wants to use it\nto optimise something before the base quals are evaluated then they\nmight be unhappy that they found some NULLs.\n\nI think you either need to explain in detail what the field means or\nseparate out the two meanings somehow.\n\nDavid\n\n\n", "msg_date": "Fri, 12 Feb 2021 14:02:01 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "Thank you all, friends!\n\nOn Fri, Feb 12, 2021 at 9:02 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Wed, 10 Feb 2021 at 16:18, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > v1-0001-Introduce-notnullattrs-field-in-RelOptInfo-to-ind.patch\n> >\n> > Introduce notnullattrs field in RelOptInfo to indicate which attr are\n> not null\n> > in current query. The not null is judged by checking pg_attribute and\n> query's\n> > restrictinfo. 
The info is only maintained at Base RelOptInfo and\n> Partition's\n> > RelOptInfo level so far.\n> >\n> >\n> > Any thoughts?\n>\n> I'm not that happy with what exactly the definition is of\n> RelOptInfo.notnullattrs.\n>\n> The comment for the field says:\n> + /* Not null attrs, start from -FirstLowInvalidHeapAttributeNumber */\n>\n> So you could expect someone to assume that these are a Bitmapset of\n> attnums for all columns in the relation marked as NOT NULL. However,\n> that's not true since you use find_nonnullable_vars() to chase down\n> quals that filter out NULL values and you mark those too.\n>\n>\nThe comment might be unclear, but the behavior is on purpose. I want\nto find more cases which can make the attr NOT NULL, all of them are\nuseful for UniqueKey stuff.\n\n\n\n> The reason I don't really like this is that it really depends where\n> you want to use RelOptInfo.notnullattrs. If someone wants to use it\n> to optimise something before the base quals are evaluated then they\n> might be unhappy that they found some NULLs.\n>\n>\nDo you mean the notnullattrs is not set correctly before the base quals are\nevaluated? I think we have lots of data structures which are set just\nafter some\nstage. but notnullattrs is special because it is set at more than 1\nstage. However\nI'm doubtful it is unacceptable, Some fields ever change their meaning at\ndifferent\nstages like Var->varno. If a user has a misunderstanding on it, it\nprobably will find it\nat the testing stage.\n\n\n> I think you either need to explain in detail what the field means or\n> separate out the two meanings somehow.\n>\n>\nAgreed. Besides the not null comes from 2 places (metadata and quals), it\nalso\nmeans it is based on the relation, rather than the RelTarget. for sample:\nA is not\nnull, but SELECT return_null_udf(A) FROM t, return_null_udf is NULL.\nI think\nthis is not well documented as well. How about just change the documents\nas:\n\n1. 
/* Not null attrs, start from -FirstLowInvalidHeapAttributeNumber, the\nNOT NULL\n * comes from pg_attribute and quals at different planning stages.\n */\n\nor\n\n2. /* Not null attrs, start from -FirstLowInvalidHeapAttributeNumber, the\nNOT NULL\n * comes from pg_attribute and quals at different planning stages. And\nit just means\n * the base attr rather than RelOptInfo->reltarget.\n */\n\nI don't like to separate them into 2 fields because it may make the usage\nharder a\nbit as well.\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Fri, 12 Feb 2021 10:17:48 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "On Thu, Feb 11, 2021 at 9:09 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> Can this information be part of PathTarget structure and hence part of\n> RelOptInfo::reltarget, so that it can be extended to join, group and\n> other kinds of RelOptInfo in future?\n\n\nI think you want to expand this field in a more generic way. For example:\nSELECT udf(a) FROM t WHERE a is not null; In current implementation, I\nonly\nknows a is not null, nothing about if udf(a) is null or not. And we can't\npresent anything\nfor joinrel as well since it is just attno.\n\nAt the same time, looks we can't tell if UDF(A) is null even if the UDF\nis strict and\nA is not null?\n\n\n> In fact, it might be easy to do that in this patch itself.\n>\n\nActually I can't think out the method:)\n\n\n> On Wed, Feb 10, 2021 at 8:57 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >\n> >\n> > On Wed, Feb 10, 2021 at 11:18 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >>\n> >> Hi:\n> >>\n> >> This patch is the first patch in UniqueKey patch series[1], since I\n> need to revise\n> >> that series many times but the first one would be not that often, so\n> I'd like to\n> >> submit this one for review first so that I don't need to maintain it\n> again and again.\n> >>\n> >> v1-0001-Introduce-notnullattrs-field-in-RelOptInfo-to-ind.patch\n> >>\n> >> Introduce notnullattrs field in RelOptInfo to indicate which attr are\n> not null\n> >> in current query. The not null is judged by checking pg_attribute and\n> query's\n> >> restrictinfo. 
The info is only maintained at Base RelOptInfo and\n> Partition's\n> >> RelOptInfo level so far.\n> >>\n> >>\n> >> Any thoughts?\n> >>\n> >> [1]\n> https://www.postgresql.org/message-id/CAKU4AWr1BmbQB4F7j22G%2BNS4dNuem6dKaUf%2B1BK8me61uBgqqg%40mail.gmail.com\n> >> --\n> >> Best Regards\n> >> Andy Fan (https://www.aliyun.com/)\n> >\n> >\n> > Add the missed patch..\n> >\n> > --\n> > Best Regards\n> > Andy Fan (https://www.aliyun.com/)\n>\n>\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n>\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Fri, 12 Feb 2021 10:31:32 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "On Thu, Feb 11, 2021 at 8:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> writes:\n> > Can this information be part of PathTarget structure and hence part of\n> > RelOptInfo::reltarget, so that it can be extended to join, group and\n> > other kinds of RelOptInfo in future?\n>\n> Why would that be better than keeping it in RelOptInfo?\n>\n> regards, tom lane\n\nWe have all the expressions relevant to a given relation (simple,\njoin, group whatever) in Pathtarget. We could remember notnullness of\nattributes of a simple relation in RelOptInfo. But IMO non/nullness of\nthe TLEs of a relation is more useful that attributes and thus\nassociate those in the PathTarget which is used to produce TLEs. 
That\nway we could use this infra in more general ways.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 12 Feb 2021 13:22:42 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> writes:\n> On Thu, Feb 11, 2021 at 8:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> writes:\n>>> Can this information be part of PathTarget structure and hence part of\n>>> RelOptInfo::reltarget, so that it can be extended to join, group and\n>>> other kinds of RelOptInfo in future?\n\n>> Why would that be better than keeping it in RelOptInfo?\n\n> We have all the expressions relevant to a given relation (simple,\n> join, group whatever) in Pathtarget. We could remember notnullness of\n> attributes of a simple relation in RelOptInfo. But IMO non/nullness of\n> the TLEs of a relation is more useful that attributes and thus\n> associate those in the PathTarget which is used to produce TLEs. That\n> way we could use this infra in more general ways.\n\nThat argument seems nearly vacuous to me, because for pretty much any\nexpression that isn't a base-relation Var, the answer will have to be\n\"don't know\". Meanwhile, there are clear costs to keeping such info\nin PathTarget, namely having to copy it around. Another issue with\nkeeping it in PathTarget is that I'm not convinced it'll be readily\navailable where you need it: most places that would be interested in\nmaking such proofs are only looking at expression trees.\n\nNow there is one angle that *might* become easier if the info were in\nPathTarget, namely that it could be simpler and more reliable to mark\nnullable output columns of an outer join as being nullable (even if\nthey came from not-null base columns). 
However, as I've muttered\nabout elsewhere, I'd prefer to deal with that can of worms by altering\nthe representation of the Vars themselves. Again, if you're looking\nat a WHERE clause, it's not real clear how you would find a relevant\nPathTarget.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 12 Feb 2021 10:04:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "On Fri, 12 Feb 2021 at 15:18, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> On Fri, Feb 12, 2021 at 9:02 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>> The reason I don't really like this is that it really depends where\n>> you want to use RelOptInfo.notnullattrs. If someone wants to use it\n>> to optimise something before the base quals are evaluated then they\n>> might be unhappy that they found some NULLs.\n>>\n>\n> Do you mean the notnullattrs is not set correctly before the base quals are\n> evaluated? I think we have lots of data structures which are set just after some\n> stage. but notnullattrs is special because it is set at more than 1 stage. However\n> I'm doubtful it is unacceptable, Some fields ever change their meaning at different\n> stages like Var->varno. If a user has a misunderstanding on it, it probably will find it\n> at the testing stage.\n\nYou're maybe focusing too much on your use case for notnullattrs. It\nonly cares about NULLs in the result for each query level.\n\n.... thinks of an example...\n\nOK, let's say I decided that COUNT(*) is faster than COUNT(id) so\ndecided that I might like to write a patch which rewrite the query to\nuse COUNT(*) when it was certain that \"id\" could not contain NULLs.\n\nThe query is:\n\nSELECT p.partid, p.partdesc,COUNT(s.saleid) FROM part p LEFT OUTER\nJOIN sales s ON p.partid = s.partid GROUP BY p.partid;\n\nsale.saleid is marked as NOT NULL in pg_attribute. 
As the writer of\nthe patch, I checked the comment for notnullattrs and it says \"Not\nnull attrs, start from -FirstLowInvalidHeapAttributeNumber\", so I\nshould be ok to assume since sales.saleid is marked in notnullattrs\nthat I can rewrite the query?!\n\nThe documentation about the RelOptInfo.notnullattrs needs to be clear\nwhat exactly it means. I'm not saying your representation of how to\nrecord NOT NULL in incorrect. I'm saying that you need to be clear\nwhat exactly is being recorded in that field.\n\nIf you want it to mean \"attribute marked here cannot output NULL\nvalues at this query level\", then you should say something along those\nlines.\n\nHowever, having said that, because this is a Bitmapset of\npg_attribute.attnums, it's only possible to record Vars from base\nrelations. It does not seem like you have any means to record\nattributes that are normally NULLable, but cannot produce NULL values\ndue to a strict join qual.\n\ne.g: SELECT t.nullable FROM t INNER JOIN j ON t.nullable = j.something;\n\nI'd expect the RelOptInfo for t not to contain a bit for the\n\"nullable\" column, but there's no way to record the fact that the join\nRelOptInfo for {t,j} cannot produce a NULL for that column. It might\nbe quite useful to know that for the UniqueKeys patch.\n\nI know there's another discussion here between Ashutosh and Tom about\nPathTarget's and Vars. I had the Var idea too once myself [1] but it\nwas quickly shot down. Tom's reasoning there in [1] seems legit. I\nguess we'd need some sort of planner version of Var and never confuse\nit with the Parse version of Var. That sounds like quite a big\nproject which would have quite a lot of code churn. I'm not sure how\nacceptable it would be to have Var represent both these things. It\ngets complex when you do equal(var1, var2) and expect that to return\ntrue when everything matches apart from the notnull field. 
We\ncurrently have this issue with the \"location\" field and we even have a\nspecial macro which just ignores those in equalfuncs.c. I imagine not\nmany people would like to expand that to other fields.\n\nIt would be good to agree on the correct representation for Vars that\ncannot produce NULLs so that we don't shut the door on classes of\noptimisation that require something other than what you need for your\ncase.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/flat/14678.1401639369%40sss.pgh.pa.us#d726d397f86755b64bb09d0c487f975f\n\n\n", "msg_date": "Tue, 16 Feb 2021 17:00:50 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "On Tue, Feb 16, 2021 at 12:01 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Fri, 12 Feb 2021 at 15:18, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >\n> > On Fri, Feb 12, 2021 at 9:02 AM David Rowley <dgrowleyml@gmail.com>\n> wrote:\n> >> The reason I don't really like this is that it really depends where\n> >> you want to use RelOptInfo.notnullattrs. If someone wants to use it\n> >> to optimise something before the base quals are evaluated then they\n> >> might be unhappy that they found some NULLs.\n> >>\n> >\n> > Do you mean the notnullattrs is not set correctly before the base quals\n> are\n> > evaluated? I think we have lots of data structures which are set just\n> after some\n> > stage. but notnullattrs is special because it is set at more than 1\n> stage. However\n> > I'm doubtful it is unacceptable, Some fields ever change their meaning\n> at different\n> > stages like Var->varno. If a user has a misunderstanding on it, it\n> probably will find it\n> > at the testing stage.\n>\n> You're maybe focusing too much on your use case for notnullattrs. It\n> only cares about NULLs in the result for each query level.\n>\n> .... 
thinks of an example...\n>\n> OK, let's say I decided that COUNT(*) is faster than COUNT(id) so\n> decided that I might like to write a patch which rewrite the query to\n> use COUNT(*) when it was certain that \"id\" could not contain NULLs.\n>\n> The query is:\n>\n> SELECT p.partid, p.partdesc,COUNT(s.saleid) FROM part p LEFT OUTER\n> JOIN sales s ON p.partid = s.partid GROUP BY p.partid;\n>\n> sale.saleid is marked as NOT NULL in pg_attribute. As the writer of\n> the patch, I checked the comment for notnullattrs and it says \"Not\n> null attrs, start from -FirstLowInvalidHeapAttributeNumber\", so I\n> should be ok to assume since sales.saleid is marked in notnullattrs\n> that I can rewrite the query?!\n>\n> The documentation about the RelOptInfo.notnullattrs needs to be clear\n> what exactly it means. I'm not saying your representation of how to\n> record NOT NULL in incorrect. I'm saying that you need to be clear\n> what exactly is being recorded in that field.\n>\n> If you want it to mean \"attribute marked here cannot output NULL\n> values at this query level\", then you should say something along those\n> lines.\n>\n\nI think I get what you mean. You are saying notnullattrs is only correct\nat the *current* stage, namely set_rel_size. It would not be true after\nthat, but the data is still there. That would cause some confusion. I\nadmit\nthat is something I didn't realize before. I checked other fields of\nRelOptInfo,\nlooks no one filed works like this, so I am not really happy with this\ndesign\nnow. I'm OK with saying more things along these lines. That can be done\nwe all understand each other well. Any better design is welcome as well.\nI think the \"Var represents null stuff\" is good, until I see your comments\nbelow.\n\n\n\n> However, having said that, because this is a Bitmapset of\n> pg_attribute.attnums, it's only possible to record Vars from base\n> relations. 
It does not seem like you have any means to record\n> attributes that are normally NULLable, but cannot produce NULL values\n> due to a strict join qual.\n>\n> e.g: SELECT t.nullable FROM t INNER JOIN j ON t.nullable = j.something;\n>\n> I'd expect the RelOptInfo for t not to contain a bit for the\n> \"nullable\" column, but there's no way to record the fact that the join\n> RelOptInfo for {t,j} cannot produce a NULL for that column. It might\n> be quite useful to know that for the UniqueKeys patch.\n>\n>\nThe current patch can detect t.nullable is not null correctly. That\nis done by find_nonnullable_vars(qual) and deconstruct_recure stage.\n\n\n> I know there's another discussion here between Ashutosh and Tom about\n> PathTarget's and Vars. I had the Var idea too once myself [1] but it\n> was quickly shot down. Tom's reasoning there in [1] seems legit. I\n> guess we'd need some sort of planner version of Var and never confuse\n> it with the Parse version of Var. That sounds like quite a big\n> project which would have quite a lot of code churn. I'm not sure how\n> acceptable it would be to have Var represent both these things. It\n> gets complex when you do equal(var1, var2) and expect that to return\n> true when everything matches apart from the notnull field. We\n> currently have this issue with the \"location\" field and we even have a\n> special macro which just ignores those in equalfuncs.c. I imagine not\n> many people would like to expand that to other fields.\n>\n>\nThanks for sharing this.\n\n\n> It would be good to agree on the correct representation for Vars that\n> cannot produce NULLs so that we don't shut the door on classes of\n> optimisation that require something other than what you need for your\n> case.\n>\n>\nAgreed. The simplest way is just adding some comments. If go a\nstep further, how about just reset the notnullattrs when it is nullable\nlater like outer join? 
I have added this logic in the attached patch.\n(comment for the notnullattrs is still not touched). I think we only\nneed to handle this in build_join_rel stage. With the v2 commit 2,\nnotnullattrs might be unset too early, but if the value is there, then\nit is correct.\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Tue, 16 Feb 2021 22:03:46 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "On Tue, Feb 16, 2021 at 10:03 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n>\n> On Tue, Feb 16, 2021 at 12:01 PM David Rowley <dgrowleyml@gmail.com>\n> wrote:\n>\n>> On Fri, 12 Feb 2021 at 15:18, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>> >\n>> > On Fri, Feb 12, 2021 at 9:02 AM David Rowley <dgrowleyml@gmail.com>\n>> wrote:\n>> >> The reason I don't really like this is that it really depends where\n>> >> you want to use RelOptInfo.notnullattrs. If someone wants to use it\n>> >> to optimise something before the base quals are evaluated then they\n>> >> might be unhappy that they found some NULLs.\n>> >>\n>> >\n>> > Do you mean the notnullattrs is not set correctly before the base quals\n>> are\n>> > evaluated? I think we have lots of data structures which are set just\n>> after some\n>> > stage. but notnullattrs is special because it is set at more than 1\n>> stage. However\n>> > I'm doubtful it is unacceptable, Some fields ever change their meaning\n>> at different\n>> > stages like Var->varno. If a user has a misunderstanding on it, it\n>> probably will find it\n>> > at the testing stage.\n>>\n>> You're maybe focusing too much on your use case for notnullattrs. It\n>> only cares about NULLs in the result for each query level.\n>>\n>> .... 
thinks of an example...\n>>\n>> OK, let's say I decided that COUNT(*) is faster than COUNT(id) so\n>> decided that I might like to write a patch which rewrite the query to\n>> use COUNT(*) when it was certain that \"id\" could not contain NULLs.\n>>\n>> The query is:\n>>\n>> SELECT p.partid, p.partdesc,COUNT(s.saleid) FROM part p LEFT OUTER\n>> JOIN sales s ON p.partid = s.partid GROUP BY p.partid;\n>>\n>> sale.saleid is marked as NOT NULL in pg_attribute. As the writer of\n>> the patch, I checked the comment for notnullattrs and it says \"Not\n>> null attrs, start from -FirstLowInvalidHeapAttributeNumber\", so I\n>> should be ok to assume since sales.saleid is marked in notnullattrs\n>> that I can rewrite the query?!\n>>\n>> The documentation about the RelOptInfo.notnullattrs needs to be clear\n>> what exactly it means. I'm not saying your representation of how to\n>> record NOT NULL in incorrect. I'm saying that you need to be clear\n>> what exactly is being recorded in that field.\n>>\n>> If you want it to mean \"attribute marked here cannot output NULL\n>> values at this query level\", then you should say something along those\n>> lines.\n>>\n>\n> I think I get what you mean. You are saying notnullattrs is only correct\n> at the *current* stage, namely set_rel_size. It would not be true after\n> that, but the data is still there. That would cause some confusion. I\n> admit\n> that is something I didn't realize before. I checked other fields of\n> RelOptInfo,\n> looks no one filed works like this, so I am not really happy with this\n> design\n> now. I'm OK with saying more things along these lines. That can be done\n> we all understand each other well. Any better design is welcome as well.\n> I think the \"Var represents null stuff\" is good, until I see your\n> comments below.\n>\n>\n>\n>> However, having said that, because this is a Bitmapset of\n>> pg_attribute.attnums, it's only possible to record Vars from base\n>> relations. 
It does not seem like you have any means to record\n>> attributes that are normally NULLable, but cannot produce NULL values\n>> due to a strict join qual.\n>>\n>> e.g: SELECT t.nullable FROM t INNER JOIN j ON t.nullable = j.something;\n>>\n>> I'd expect the RelOptInfo for t not to contain a bit for the\n>> \"nullable\" column, but there's no way to record the fact that the join\n>> RelOptInfo for {t,j} cannot produce a NULL for that column. It might\n>> be quite useful to know that for the UniqueKeys patch.\n>>\n>>\n> The current patch can detect t.nullable is not null correctly. That\n> is done by find_nonnullable_vars(qual) and deconstruct_recure stage.\n>\n>\n>> I know there's another discussion here between Ashutosh and Tom about\n>> PathTarget's and Vars. I had the Var idea too once myself [1] but it\n>> was quickly shot down. Tom's reasoning there in [1] seems legit. I\n>> guess we'd need some sort of planner version of Var and never confuse\n>> it with the Parse version of Var. That sounds like quite a big\n>> project which would have quite a lot of code churn. I'm not sure how\n>> acceptable it would be to have Var represent both these things. It\n>> gets complex when you do equal(var1, var2) and expect that to return\n>> true when everything matches apart from the notnull field. We\n>> currently have this issue with the \"location\" field and we even have a\n>> special macro which just ignores those in equalfuncs.c. I imagine not\n>> many people would like to expand that to other fields.\n>>\n>>\n> Thanks for sharing this.\n>\n>\n>> It would be good to agree on the correct representation for Vars that\n>> cannot produce NULLs so that we don't shut the door on classes of\n>> optimisation that require something other than what you need for your\n>> case.\n>>\n>>\n> Agreed. The simplest way is just adding some comments. If go a\n> step further, how about just reset the notnullattrs when it is nullable\n> later like outer join? 
I have added this logic in the attached patch.\n> (comment for the notnullattrs is still not touched).  I think we only\n> need to handle this in build_join_rel stage.\n>\n\n ..\n\n> With the v2 commit 2,\n> notnullattrs might be unset too early, but if the value is there, then\n> it is correct.\n>\n>\nThis looks bad as well.  How about adding an extra field in RelOptInfo for\nthe\nouter join case.  For example:\n\n@@ -710,8 +710,14 @@ typedef struct RelOptInfo\n         PlannerInfo *subroot;           /* if subquery */\n         List       *subplan_params; /* if subquery */\n         int                     rel_parallel_workers;   /* wanted number of\nparallel workers */\n-       /* Not null attrs, start from -FirstLowInvalidHeapAttributeNumber */\n+       /*\n+        * Not null attrs, start from -FirstLowInvalidHeapAttributeNumber.\nThe nullness\n+        * might be changed after outer join, So we need to consult with\nleftouter_relids\n+        * before using it.\n+        */\n        Bitmapset               *notnullattrs;\n+       /* A list of Relids which will be a outer rel when join with this\nrelation. */\n+       List    *leftouter_relids;\n\n        /* Information about foreign tables and foreign joins */\n        Oid                     serverid;               /* identifies\nserver for the table or join */\n\nleftout_relids should be able to be filled with root->join_info_list.  If\nwe go with this\ndirection,  not sure leftouter_relids should be a List or not since I even\ncan't think\nout a query which can have more than one relids for a relation.\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Tue, 16 Feb 2021 23:01:44 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "On Tue, Feb 16, 2021 at 12:01 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Fri, 12 Feb 2021 at 15:18, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >\n> > On Fri, Feb 12, 2021 at 9:02 AM David Rowley <dgrowleyml@gmail.com>\n> wrote:\n> >> The reason I don't really like this is that it really depends where\n> >> you want to use RelOptInfo.notnullattrs. If someone wants to use it\n> >> to optimise something before the base quals are evaluated then they\n> >> might be unhappy that they found some NULLs.\n> >>\n> >\n> > Do you mean the notnullattrs is not set correctly before the base quals\n> are\n> > evaluated? I think we have lots of data structures which are set just\n> after some\n> > stage. but notnullattrs is special because it is set at more than 1\n> stage. However\n> > I'm doubtful it is unacceptable, Some fields ever change their meaning\n> at different\n> > stages like Var->varno. If a user has a misunderstanding on it, it\n> probably will find it\n> > at the testing stage.\n>\n> You're maybe focusing too much on your use case for notnullattrs. It\n> only cares about NULLs in the result for each query level.\n>\n> .... 
thinks of an example...\n>\n> OK, let's say I decided that COUNT(*) is faster than COUNT(id) so\n> decided that I might like to write a patch which rewrite the query to\n> use COUNT(*) when it was certain that \"id\" could not contain NULLs.\n>\n> The query is:\n>\n> SELECT p.partid, p.partdesc,COUNT(s.saleid) FROM part p LEFT OUTER\n> JOIN sales s ON p.partid = s.partid GROUP BY p.partid;\n>\n> sale.saleid is marked as NOT NULL in pg_attribute. As the writer of\n> the patch, I checked the comment for notnullattrs and it says \"Not\n> null attrs, start from -FirstLowInvalidHeapAttributeNumber\", so I\n> should be ok to assume since sales.saleid is marked in notnullattrs\n> that I can rewrite the query?!\n>\n> The documentation about the RelOptInfo.notnullattrs needs to be clear\n> what exactly it means. I'm not saying your representation of how to\n> record NOT NULL in incorrect. I'm saying that you need to be clear\n> what exactly is being recorded in that field.\n>\n> If you want it to mean \"attribute marked here cannot output NULL\n> values at this query level\", then you should say something along those\n> lines.\n>\n> However, having said that, because this is a Bitmapset of\n> pg_attribute.attnums, it's only possible to record Vars from base\n> relations. It does not seem like you have any means to record\n> attributes that are normally NULLable, but cannot produce NULL values\n> due to a strict join qual.\n>\n> e.g: SELECT t.nullable FROM t INNER JOIN j ON t.nullable = j.something;\n>\n> I'd expect the RelOptInfo for t not to contain a bit for the\n> \"nullable\" column, but there's no way to record the fact that the join\n> RelOptInfo for {t,j} cannot produce a NULL for that column. It might\n> be quite useful to know that for the UniqueKeys patch.\n>\n\nI checked again and found I do miss the check on JoinExpr->quals. I have\nfixed it in v3 patch. 
Thanks for the review!\n\nIn the attached v3, commit 1 is the real patch, and commit 2 just adds\nsome logs to help local testing. notnull.sql/notnull.out is the test case\nfor\nthis patch, both commit 2 and notnull.* are not intended to be committed\nat last.\n\nBesides the above fix in v3, I changed the comments along the notnullattrs\nas below and added a true positive helper function is_var_nullable.\n\n+ /*\n+ * Not null attrs, the values are calculated by looking into\npg_attribute and quals\n+ * However both cases are not reliable in some outer join cases. So\nwhen\n+ * we want to check if a Var is nullable, function is_var_nullable\nis a good\n+ * place to start with, which is true positive.\n+ */\n+ Bitmapset *notnullattrs;\n\nI also found separating the two meanings is unnecessary since both of them\nare not reliable in the outer join case and we are just wanting which attr\nis\nnullable, no matter how we know it. The below example shows why\nnot-null-by-qual\nis not reliable as well (at least with the current implementation)\n\ncreate table n1(a int, b int not null);\ncreate table n2(a int, b int not null);\ncreate table n3(a int, b int not null);\n\nselect * from n1 left join n2 on n1.a = n2.a full join n3 on n2.a = n3.a;\n\nIn this case, when we check (n1 left join n2 on n1.a = n2.a) , we know\nn1.a is not nullable. However after full join with n3, it changed.\n\n\n> I know there's another discussion here between Ashutosh and Tom about\n> PathTarget's and Vars. I had the Var idea too once myself [1] but it\n> was quickly shot down. Tom's reasoning there in [1] seems legit. I\n> guess we'd need some sort of planner version of Var and never confuse\n> it with the Parse version of Var. That sounds like quite a big\n> project which would have quite a lot of code churn. I'm not sure how\n> acceptable it would be to have Var represent both these things. 
It\n> gets complex when you do equal(var1, var2) and expect that to return\n> true when everything matches apart from the notnull field. We\n> currently have this issue with the \"location\" field and we even have a\n> special macro which just ignores those in equalfuncs.c. I imagine not\n> many people would like to expand that to other fields.\n>\n> It would be good to agree on the correct representation for Vars that\n> cannot produce NULLs so that we don't shut the door on classes of\n> optimisation that require something other than what you need for your\n> case.\n>\n> David\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/14678.1401639369%40sss.pgh.pa.us#d726d397f86755b64bb09d0c487f975f\n>\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Thu, 18 Feb 2021 20:58:13 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "> On Thu, Feb 18, 2021 at 08:58:13PM +0800, Andy Fan wrote:\n\nThanks for continuing work on this patch!\n\n> On Tue, Feb 16, 2021 at 12:01 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> > On Fri, 12 Feb 2021 at 15:18, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > >\n> > > On Fri, Feb 12, 2021 at 9:02 AM David Rowley <dgrowleyml@gmail.com>\n> > wrote:\n> > >> The reason I don't really like this is that it really depends where\n> > >> you want to use RelOptInfo.notnullattrs. If someone wants to use it\n> > >> to optimise something before the base quals are evaluated then they\n> > >> might be unhappy that they found some NULLs.\n> > >>\n> > >\n> > > Do you mean the notnullattrs is not set correctly before the base quals\n> > are\n> > > evaluated? I think we have lots of data structures which are set just\n> > after some\n> > > stage. but notnullattrs is special because it is set at more than 1\n> > stage. 
However\n> > > I'm doubtful it is unacceptable, Some fields ever change their meaning\n> > at different\n> > > stages like Var->varno. If a user has a misunderstanding on it, it\n> > probably will find it\n> > > at the testing stage.\n> >\n> > You're maybe focusing too much on your use case for notnullattrs. It\n> > only cares about NULLs in the result for each query level.\n> >\n> > .... thinks of an example...\n> >\n> > OK, let's say I decided that COUNT(*) is faster than COUNT(id) so\n> > decided that I might like to write a patch which rewrite the query to\n> > use COUNT(*) when it was certain that \"id\" could not contain NULLs.\n> >\n> > The query is:\n> >\n> > SELECT p.partid, p.partdesc,COUNT(s.saleid) FROM part p LEFT OUTER\n> > JOIN sales s ON p.partid = s.partid GROUP BY p.partid;\n> >\n> > sale.saleid is marked as NOT NULL in pg_attribute. As the writer of\n> > the patch, I checked the comment for notnullattrs and it says \"Not\n> > null attrs, start from -FirstLowInvalidHeapAttributeNumber\", so I\n> > should be ok to assume since sales.saleid is marked in notnullattrs\n> > that I can rewrite the query?!\n> >\n> > The documentation about the RelOptInfo.notnullattrs needs to be clear\n> > what exactly it means. I'm not saying your representation of how to\n> > record NOT NULL in incorrect. I'm saying that you need to be clear\n> > what exactly is being recorded in that field.\n> >\n> > If you want it to mean \"attribute marked here cannot output NULL\n> > values at this query level\", then you should say something along those\n> > lines.\n> >\n> > However, having said that, because this is a Bitmapset of\n> > pg_attribute.attnums, it's only possible to record Vars from base\n> > relations. 
It does not seem like you have any means to record\n> > attributes that are normally NULLable, but cannot produce NULL values\n> > due to a strict join qual.\n> >\n> > e.g: SELECT t.nullable FROM t INNER JOIN j ON t.nullable = j.something;\n> >\n> > I'd expect the RelOptInfo for t not to contain a bit for the\n> > \"nullable\" column, but there's no way to record the fact that the join\n> > RelOptInfo for {t,j} cannot produce a NULL for that column. It might\n> > be quite useful to know that for the UniqueKeys patch.\n> >\n>\n> I checked again and found I do miss the check on JoinExpr->quals. I have\n> fixed it in v3 patch. Thanks for the review!\n>\n> In the attached v3, commit 1 is the real patch, and commit 2 is just add\n> some logs to help local testing. notnull.sql/notnull.out is the test case\n> for\n> this patch, both commit 2 and notnull.* are not intended to be committed\n> at last.\n\nJust to clarify, this version of notnullattrs here is the latest one,\nand another one from \"UniqueKey on Partitioned table\" thread should be\ndisregarded?\n\n> Besides the above fix in v3, I changed the comments alongs the notnullattrs\n> as below and added a true positive helper function is_var_nullable.\n\nWith \"true positive\" you mean it will always correctly say if a Var is\nnullable or not? I'm not sure about this, but couldn't be there still\nsome cases when a Var belongs to nullable_baserels, but still has some\nconstraints preventing it from being nullable (e.g. a silly example when\nthe not nullable column belong to the table, and the query does full\njoin of this table on itself using this column)?\n\nIs this function necessary for the following patches? 
I've got an\nimpression that the discussion in this thread was mostly evolving about\ncorrect description when notnullattrs could be used, not making it\nbullet proof.\n\n> Bitmapset *notnullattrs;\n\nIt looks like RelOptInfo has its own out function _outRelOptInfo,\nprobably the notnullattrs should be also present there as BITMAPSET_FIELD?\n\nAs a side note, I've attached those two new threads to CF item [1],\nhopefully it's correct.\n\n[1]: https://commitfest.postgresql.org/32/2433/\n\n\n", "msg_date": "Thu, 4 Mar 2021 17:03:11 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "On Fri, Mar 5, 2021 at 12:00 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> > On Thu, Feb 18, 2021 at 08:58:13PM +0800, Andy Fan wrote:\n>\n> Thanks for continuing work on this patch!\n>\n> > On Tue, Feb 16, 2021 at 12:01 PM David Rowley <dgrowleyml@gmail.com>\n> wrote:\n> >\n> > > On Fri, 12 Feb 2021 at 15:18, Andy Fan <zhihui.fan1213@gmail.com>\n> wrote:\n> > > >\n> > > > On Fri, Feb 12, 2021 at 9:02 AM David Rowley <dgrowleyml@gmail.com>\n> > > wrote:\n> > > >> The reason I don't really like this is that it really depends where\n> > > >> you want to use RelOptInfo.notnullattrs. If someone wants to use it\n> > > >> to optimise something before the base quals are evaluated then they\n> > > >> might be unhappy that they found some NULLs.\n> > > >>\n> > > >\n> > > > Do you mean the notnullattrs is not set correctly before the base\n> quals\n> > > are\n> > > > evaluated? I think we have lots of data structures which are set\n> just\n> > > after some\n> > > > stage. but notnullattrs is special because it is set at more than 1\n> > > stage. However\n> > > > I'm doubtful it is unacceptable, Some fields ever change their\n> meaning\n> > > at different\n> > > > stages like Var->varno. 
If a user has a misunderstanding on it, it\n> > > probably will find it\n> > > > at the testing stage.\n> > >\n> > > You're maybe focusing too much on your use case for notnullattrs. It\n> > > only cares about NULLs in the result for each query level.\n> > >\n> > > .... thinks of an example...\n> > >\n> > > OK, let's say I decided that COUNT(*) is faster than COUNT(id) so\n> > > decided that I might like to write a patch which rewrite the query to\n> > > use COUNT(*) when it was certain that \"id\" could not contain NULLs.\n> > >\n> > > The query is:\n> > >\n> > > SELECT p.partid, p.partdesc,COUNT(s.saleid) FROM part p LEFT OUTER\n> > > JOIN sales s ON p.partid = s.partid GROUP BY p.partid;\n> > >\n> > > sale.saleid is marked as NOT NULL in pg_attribute. As the writer of\n> > > the patch, I checked the comment for notnullattrs and it says \"Not\n> > > null attrs, start from -FirstLowInvalidHeapAttributeNumber\", so I\n> > > should be ok to assume since sales.saleid is marked in notnullattrs\n> > > that I can rewrite the query?!\n> > >\n> > > The documentation about the RelOptInfo.notnullattrs needs to be clear\n> > > what exactly it means. I'm not saying your representation of how to\n> > > record NOT NULL in incorrect. I'm saying that you need to be clear\n> > > what exactly is being recorded in that field.\n> > >\n> > > If you want it to mean \"attribute marked here cannot output NULL\n> > > values at this query level\", then you should say something along those\n> > > lines.\n> > >\n> > > However, having said that, because this is a Bitmapset of\n> > > pg_attribute.attnums, it's only possible to record Vars from base\n> > > relations. 
It does not seem like you have any means to record\n> > > attributes that are normally NULLable, but cannot produce NULL values\n> > > due to a strict join qual.\n> > >\n> > > e.g: SELECT t.nullable FROM t INNER JOIN j ON t.nullable = j.something;\n> > >\n> > > I'd expect the RelOptInfo for t not to contain a bit for the\n> > > \"nullable\" column, but there's no way to record the fact that the join\n> > > RelOptInfo for {t,j} cannot produce a NULL for that column. It might\n> > > be quite useful to know that for the UniqueKeys patch.\n> > >\n> >\n> > I checked again and found I do miss the check on JoinExpr->quals. I have\n> > fixed it in v3 patch. Thanks for the review!\n> >\n> > In the attached v3, commit 1 is the real patch, and commit 2 is just add\n> > some logs to help local testing. notnull.sql/notnull.out is the test\n> case\n> > for\n> > this patch, both commit 2 and notnull.* are not intended to be committed\n> > at last.\n>\n> Just to clarify, this version of notnullattrs here is the latest one,\n> and another one from \"UniqueKey on Partitioned table\" thread should be\n> disregarded?\n>\n\nActually they are different sections for UniqueKey. Since I don't want to\nmess\ntwo topics in one place, I open another thread. The topic here is how to\nrepresent\na not null attribute, which is a precondition for all UniqueKey stuff. The\nthread\n\" UniqueKey on Partitioned table[1] \" is talking about how to maintain the\nUniqueKey on a partitioned table only.\n\n\n>\n> > Besides the above fix in v3, I changed the comments alongs the\n> notnullattrs\n> > as below and added a true positive helper function is_var_nullable.\n>\n> With \"true positive\" you mean it will always correctly say if a Var is\n> nullable or not?\n\n\nnot null. If I say it is not null (return value is false), it is not null\nfor sure. If\nit is nullable (true), it may be still not null for some stages. 
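To illustrate that contract, here is a toy model in plain C — the struct, field and function names are all invented for this example and are not the actual is_var_nullable code: a false result is a guarantee, a true result only means "no proof".

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Toy model: a bit set in notnullattrs means the attr is proven NOT NULL;
 * nullable_side models a rel that can be NULL-extended by an outer join.
 */
typedef struct ToyRel
{
    uint64_t notnullattrs;  /* one bit per attno */
    bool     nullable_side; /* rel may be NULL-extended later */
} ToyRel;

/*
 * True-positive check: returning false guarantees "not null";
 * returning true only means "cannot prove not null".
 */
bool
toy_var_nullable(const ToyRel *rel, int attno)
{
    if (rel->nullable_side)
        return true;    /* proof is voided, so just report nullable */
    return (rel->notnullattrs & (UINT64_C(1) << attno)) == 0;
}
```

So a caller can rely on a false answer unconditionally, while a true answer proves nothing either way.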
But I\ndon't want\nto distinguish them too much, so I just say it is nullable.\n\n\n> I'm not sure about this, but couldn't be there still\n> some cases when a Var belongs to nullable_baserels, but still has some\n> constraints preventing it from being nullable (e.g. a silly example when\n> the not nullable column belong to the table, and the query does full\n> join of this table on itself using this column)?\n>\n> Do you say something like \"SELECT * FROM t1 left join t2 on t1.a = t2.a\nWHERE\nt2.b = 3; \"? In this case, the outer join will be reduced to inner join\nat\nreduce_outer_join stage, which means t2 will not be shown in\nnullable_baserels.\n\n\n> Is this function necessary for the following patches? I've got an\n> impression that the discussion in this thread was mostly evolving about\n> correct description when notnullattrs could be used, not making it\n> bullet proof.\n>\n\nExactly, that is the blocker issue right now. I hope more authorities can\ngive\nsome suggestions to move on.\n\n\n> > Bitmapset *notnullattrs;\n>\n> It looks like RelOptInfo has its own out function _outRelOptInfo,\n> probably the notnullattrs should be also present there as BITMAPSET_FIELD?\n>\n>\nYes, it should be added.\n\n\n> As a side note, I've attached those two new threads to CF item [1],\n> hopefully it's correct.\n>\n> [1]: https://commitfest.postgresql.org/32/2433/\n>\n\nThanks for doing that. 
It is correct.\n\n[1]\nhttps://www.postgresql.org/message-id/CAKU4AWrU35c9g3cE15JmVwh6B2Hzf4hf7cZUkRsiktv7AKR3Ag@mail.gmail.com\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)
I've got an\nimpression that the discussion in this thread was mostly evolving about\ncorrect description when notnullattrs could be used, not making it\nbullet proof.Exactly, that is the blocker issue right now. I hope more authorities can givesome suggestions to move on. \n\n>   Bitmapset       *notnullattrs;\n\nIt looks like RelOptInfo has its own out function _outRelOptInfo,\nprobably the notnullattrs should be also present there as BITMAPSET_FIELD?\nYes, it should be added.  \nAs a side note, I've attached those two new threads to CF item [1],\nhopefully it's correct.\n\n[1]: https://commitfest.postgresql.org/32/2433/\nThanks for doing that.  It is correct. [1] https://www.postgresql.org/message-id/CAKU4AWrU35c9g3cE15JmVwh6B2Hzf4hf7cZUkRsiktv7AKR3Ag@mail.gmail.com -- Best RegardsAndy Fan (https://www.aliyun.com/)", "msg_date": "Fri, 5 Mar 2021 10:22:45 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "> On Fri, Mar 05, 2021 at 10:22:45AM +0800, Andy Fan wrote:\n> > > I checked again and found I do miss the check on JoinExpr->quals. I have\n> > > fixed it in v3 patch. Thanks for the review!\n> > >\n> > > In the attached v3, commit 1 is the real patch, and commit 2 is just add\n> > > some logs to help local testing. notnull.sql/notnull.out is the test\n> > case\n> > > for\n> > > this patch, both commit 2 and notnull.* are not intended to be committed\n> > > at last.\n> >\n> > Just to clarify, this version of notnullattrs here is the latest one,\n> > and another one from \"UniqueKey on Partitioned table\" thread should be\n> > disregarded?\n> >\n>\n> Actually they are different sections for UniqueKey. Since I don't want to\n> mess\n> two topics in one place, I open another thread. The topic here is how to\n> represent\n> a not null attribute, which is a precondition for all UniqueKey stuff. 
The\n> thread\n> \" UniqueKey on Partitioned table[1] \" is talking about how to maintain the\n> UniqueKey on a partitioned table only.\n\nSure, those two threads are addressing different topics. But [1] also\nincludes the patch for notnullattrs (I guess it's the same as one of the\nolder versions from this thread), so it would be good to specify which\none should be used to avoid any confusion.\n\n> > I'm not sure about this, but couldn't be there still\n> > some cases when a Var belongs to nullable_baserels, but still has some\n> > constraints preventing it from being nullable (e.g. a silly example when\n> > the not nullable column belong to the table, and the query does full\n> > join of this table on itself using this column)?\n> >\n> > Do you say something like \"SELECT * FROM t1 left join t2 on t1.a = t2.a\n> WHERE\n> t2.b = 3; \"? In this case, the outer join will be reduced to inner join\n> at\n> reduce_outer_join stage, which means t2 will not be shown in\n> nullable_baserels.\n\nNope, as I said it's a bit useless example of full self join t1 on\nitself. In this case not null column \"a\" will be considered as nullable,\nbut following your description for is_var_nullable it's fine (although\ncouple of commentaries to this function are clearly necessary).\n\n> > Is this function necessary for the following patches? I've got an\n> > impression that the discussion in this thread was mostly evolving about\n> > correct description when notnullattrs could be used, not making it\n> > bullet proof.\n> >\n>\n> Exactly, that is the blocker issue right now. I hope more authorities can\n> give\n> some suggestions to move on.\n\nHm...why essentially a documentation question is the blocker? 
Or if you\nmean it's a question of the patch scope, are there any arguments for\nextending it?\n\n\n", "msg_date": "Fri, 5 Mar 2021 09:19:00 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "On Fri, Mar 5, 2021 at 4:16 PM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> > On Fri, Mar 05, 2021 at 10:22:45AM +0800, Andy Fan wrote:\n> > > > I checked again and found I do miss the check on JoinExpr->quals. I\n> have\n> > > > fixed it in v3 patch. Thanks for the review!\n> > > >\n> > > > In the attached v3, commit 1 is the real patch, and commit 2 is\n> just add\n> > > > some logs to help local testing. notnull.sql/notnull.out is the test\n> > > case\n> > > > for\n> > > > this patch, both commit 2 and notnull.* are not intended to be\n> committed\n> > > > at last.\n> > >\n> > > Just to clarify, this version of notnullattrs here is the latest one,\n> > > and another one from \"UniqueKey on Partitioned table\" thread should be\n> > > disregarded?\n> > >\n> >\n> > Actually they are different sections for UniqueKey. Since I don't want\n> to\n> > mess\n> > two topics in one place, I open another thread. The topic here is how to\n> > represent\n> > a not null attribute, which is a precondition for all UniqueKey stuff.\n> The\n> > thread\n> > \" UniqueKey on Partitioned table[1] \" is talking about how to maintain\n> the\n> > UniqueKey on a partitioned table only.\n>\n> Sure, those two threads are addressing different topics. But [1] also\n> includes the patch for notnullattrs (I guess it's the same as one of the\n> older versions from this thread), so it would be good to specify which\n> one should be used to avoid any confusion.\n>\n> > > I'm not sure about this, but couldn't be there still\n> > > some cases when a Var belongs to nullable_baserels, but still has some\n> > > constraints preventing it from being nullable (e.g. 
a silly example\n> when\n> > > the not nullable column belong to the table, and the query does full\n> > > join of this table on itself using this column)?\n> > >\n> > > Do you say something like \"SELECT * FROM t1 left join t2 on t1.a = t2.a\n> > WHERE\n> > t2.b = 3; \"?   In this case, the outer join will be reduced to inner join\n> > at\n> > reduce_outer_join stage, which means t2 will not be shown in\n> > nullable_baserels.\n>\n> Nope, as I said it's a bit useless example of full self join t1 on\n> itself. In this case not null column \"a\" will be considered as nullable,\n> but following your description for is_var_nullable it's fine (although\n> couple of commentaries to this function are clearly necessary).\n>\n> > > Is this function necessary for the following patches? I've got an\n> > > impression that the discussion in this thread was mostly evolving about\n> > > correct description when notnullattrs could be used, not making it\n> > > bullet proof.\n> > >\n> >\n> > Exactly, that is the blocker issue right now. I hope more authorities can\n> > give\n> > some suggestions to move on.\n>\n> Hm...why essentially a documentation question is the blocker? Or if you\n> mean it's a question of the patch scope, are there any arguments for\n> extending it?\n>\n\nI treat the below comment as the blocker issue:\n\n> It would be good to agree on the correct representation for Vars that\n> cannot produce NULLs so that we don't shut the door on classes of\n> optimisation that require something other than what you need for your\n> case.\n\nDavid/Tom/Ashutosh, do you mind to share more insights to this?\nI mean the target is the patch is in a committable state.\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Sat, 6 Mar 2021 06:45:11 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "On Tue, Feb 16, 2021 at 12:01 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Fri, 12 Feb 2021 at 15:18, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >\n> > On Fri, Feb 12, 2021 at 9:02 AM David Rowley <dgrowleyml@gmail.com>\n> wrote:\n> >> The reason I don't really like this is that it really depends where\n> >> you want to use RelOptInfo.notnullattrs. 
If someone wants to use it\n> >> to optimise something before the base quals are evaluated then they\n> >> might be unhappy that they found some NULLs.\n> >>\n> >\n> > Do you mean the notnullattrs is not set correctly before the base quals\n> are\n> > evaluated? I think we have lots of data structures which are set just\n> after some\n> > stage. but notnullattrs is special because it is set at more than 1\n> stage. However\n> > I'm doubtful it is unacceptable, Some fields ever change their meaning\n> at different\n> > stages like Var->varno. If a user has a misunderstanding on it, it\n> probably will find it\n> > at the testing stage.\n>\n> You're maybe focusing too much on your use case for notnullattrs. It\n> only cares about NULLs in the result for each query level.\n>\n> .... thinks of an example...\n>\n> OK, let's say I decided that COUNT(*) is faster than COUNT(id) so\n> decided that I might like to write a patch which rewrite the query to\n> use COUNT(*) when it was certain that \"id\" could not contain NULLs.\n>\n> The query is:\n>\n> SELECT p.partid, p.partdesc,COUNT(s.saleid) FROM part p LEFT OUTER\n> JOIN sales s ON p.partid = s.partid GROUP BY p.partid;\n>\n> sale.saleid is marked as NOT NULL in pg_attribute. As the writer of\n> the patch, I checked the comment for notnullattrs and it says \"Not\n> null attrs, start from -FirstLowInvalidHeapAttributeNumber\", so I\n> should be ok to assume since sales.saleid is marked in notnullattrs\n> that I can rewrite the query?!\n>\n> The documentation about the RelOptInfo.notnullattrs needs to be clear\n> what exactly it means. I'm not saying your representation of how to\n> record NOT NULL in incorrect. 
I'm saying that you need to be clear\n> what exactly is being recorded in that field.\n>\n> If you want it to mean \"attribute marked here cannot output NULL\n> values at this query level\", then you should say something along those\n> lines.\n>\n> However, having said that, because this is a Bitmapset of\n> pg_attribute.attnums, it's only possible to record Vars from base\n> relations.  It does not seem like you have any means to record\n> attributes that are normally NULLable, but cannot produce NULL values\n> due to a strict join qual.\n>\n> e.g: SELECT t.nullable FROM t INNER JOIN j ON t.nullable = j.something;\n>\n> I'd expect the RelOptInfo for t not to contain a bit for the\n> \"nullable\" column, but there's no way to record the fact that the join\n> RelOptInfo for {t,j} cannot produce a NULL for that column. It might\n> be quite useful to know that for the UniqueKeys patch.\n>\n>\nI read your comments again and find I miss your point before. So I'd\nsummarize\nmy current understanding to make sure we are in the same place for\nfurther\nworking.\n\nI want to define a notnullattrs on RelOptInfo struct. The not nullable\nmay come from catalog definition or quals on the given query. For example:\n\nCREATE TABLE t(a INT NOT NULL, nullable INT);\nSELECT * FROM t; ==>  a is not null for sure by definition.\nSELECT * FROM t WHERE nullable > 3; ==> nullable is not null as well by\nqual.\n\nHowever the thing becomes complex with the below 2 cases.\n\n1. SELECT * FROM t INNER JOIN j on t.nullable = q.b;\nWe know t.b will not be null **finally**. But the current plan may be something\nlike this:\n\n                QUERY PLAN\n------------------------------------------\n Merge Join\n   Merge Cond: (t.nullable = j.something)\n   ->  Sort\n         Sort Key: t.nullable\n         ->  Seq Scan on t\n   ->  Sort\n         Sort Key: j.something\n         ->  Seq Scan on j\n(8 rows)\n\nwhich means the Path \"Seq Scan on t\" still contains some null values. 
At\nleast,\nwe should not assume t.nullable is \"not nullable\" at the base relation stage.\n\n2. SELECT t.a FROM j LEFT JOIN t ON t.b = t.a;\nEven though t.a is not null by definition, it may be null **finally**\ndue to\nthe outer join.\n\nMy current patch doesn't handle the 2 cases well since t.nullable is marked\nas\nNOT NULL for both cases.\n\n\n> I know there's another discussion here between Ashutosh and Tom about\n> PathTarget's and Vars. I had the Var idea too once myself [1] but it\n> was quickly shot down. Tom's reasoning there in [1] seems legit. I\n> guess we'd need some sort of planner version of Var and never confuse\n> it with the Parse version of Var. That sounds like quite a big\n> project which would have quite a lot of code churn. I'm not sure how\n> acceptable it would be to have Var represent both these things. It\n> gets complex when you do equal(var1, var2) and expect that to return\n> true when everything matches apart from the notnull field. We\n> currently have this issue with the \"location\" field and we even have a\n> special macro which just ignores those in equalfuncs.c. I imagine not\n> many people would like to expand that to other fields.\n>\n> It would be good to agree on the correct representation for Vars that\n> cannot produce NULLs so that we don't shut the door on classes of\n> optimisation that require something other than what you need for your\n> case.\n>\n>\nLooks like we have to maintain not null on the general RelOptInfo level rather\nthan Base\nRelOptInfo. But I don't want to teach Var about the notnull so far. The\nreasons are: 1).\nWe need to maintain the Planner version and Parser version due to the VIEW\ncase.\n2). We have to ignore the extra part for equal(Var, Var). 3). Var is\nusually shared among\ndifferent RelOptInfos, which means we have to maintain different copies for\nthis purpose IIUC.\n\nI assume we want to know if a Var is nullable with a function like\nis_var_notnullable(Var *var,  Relids relids). 
If so, we can define the\ndata as below:\n\nstruct RelOptInfo {\n\nBitmapset** notnullattrs;\n..\n};\n\nAfter this we can implement the function as:\n\nbool\nis_var_notnullable(Var* var, Relids relids)\n{\n  RelOptInfo *rel = find_rel_by_relids(reldis);\n  return bms_is_member(var->varattno, rel->notnullattrs[var->varno]);\n}\n\nProbably we can make some hackers to reduce the notnullattrs's memory usage\noverhead.\n\nAny thoughts?\n\n\n\n> David\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/14678.1401639369%40sss.pgh.pa.us#d726d397f86755b64bb09d0c487f975f\n>\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Sat, 27 Mar 2021 17:47:18 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": ">\n> I assume we want to know if a Var is nullable with a function like.\n> is_var_notnullable(Var *var,  Relids relids). 
If so, we can define the\n> data as below:\n>\n> struct RelOptInfo {\n>\n> Bitmapset** notnullattrs;\n> ..\n> };\n>\n> After this we can implement the function as:\n>\n> bool\n> is_var_notnullable(Var* var, Relids relids)\n> {\n>   RelOptInfo *rel = find_rel_by_relids(reldis);\n>   return bms_is_member(var->varattno, rel->notnullattrs[var->varno]);\n> }\n>\n> Probably we can make some hackers to reduce the notnullattrs's memory usage\n> overhead.\n>\n>\nTo be more precise,  to make the rel->notnullattrs shorter, we can do the\nfollowing methods:\n1). Relids only has single element, we can always use a 1-len array rather\nthan rel->varno\nelements. 2).  For multi-elements relids, we use the max(varno) as the\nlength of rel->notnullattrs.\n3). For some cases,  the notnullattrs of a baserel is not changed in later\nstages, we can just\nreuse the same Bitmapset * in later stages.\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Wed, 31 Mar 2021 08:44:53 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": ">\n> However the thing becomes complex with the below 2 cases.\n>\n> 1. SELECT * FROM t INNER JOIN j on t.nullable = q.b;\n> We know t.b will be not null **finally**. But the current plan may\n> something\n> like this:\n>\n> QUERY PLAN\n> ------------------------------------------\n> Merge Join\n>   Merge Cond: (t.nullable = j.something)\n>   ->  Sort\n>         Sort Key: t.nullable\n>         ->  Seq Scan on t\n>   ->  Sort\n>         Sort Key: j.something\n>         ->  Seq Scan on j\n> (8 rows)\n>\n> which means the Path \"Seq Scan on t\" still contains some null values. At\n> least,\n> we should not assume t.nullable is \"not nullable\" the base relation stage.\n>\n> 2. SELECT t.a FROM j LEFT JOIN t ON t.b = t.a;\n> Even the t.a is not null by definition, but it may have null **finally**\n> due to\n> the outer join.\n>\n\nThe above 2 cases have been addressed by defining the notnullattrs on\nevery RelOptInfo, and maintaining them on every join. However,  per offline\ndiscussion with David, IIUC,  there is a more case to think about.\n\nCREATE TABLE t (a INT, b INT);\nSELECT * FROM t WHERE a = 1 and b = 2;\n\nWe know b is not null after we evaluate the qual b = 2,  but it may still\nnullable when we just evaluate a = 1;\n\nI prefer to not handle it by saying the semantics of notnullattrs is correct\nafter we evaluate all the quals on its RelOptInfo.\n\n\n\n> It would be good to agree on the correct representation for Vars that\n>> cannot produce NULLs so that we don't shut the door on classes of\n>> optimisation that require something other than what you need for your\n>> case.\n>>\n>>\n> Looks we have to maintain not null on the general RelOptInfo level rather\n> than Base\n> RelOptInfo. 
But I don't want to teach Var about the notnull so far. The\n> reasons are: 1).\n> We need to maintain the Planner version and Parser version due to the VIEW\n> case.\n> 2). We have to ignore the extra part for equal(Var, Var) . 3). Var is\n> usually shared among\n> different RelOptInfo. which means we have to maintain different copies for\n> this purpose IIUC.\n>\n> I assume we want to know if a Var is nullable with a function like.\n> is_var_notnullable(Var *var,  Relids relids).  If so, we can define the\n> data as below:\n>\n> struct RelOptInfo {\n>\n> Bitmapset** notnullattrs;\n> ..\n> };\n>\n> After this we can implement the function as:\n>\n\n/*\n * is_var_notnullable\n *   Check if the var is nullable for a given RelOptIno after\n * all the quals on it have been evaluated.\n *\n * var is the var to check,  relids is the ids of a RelOptInfo\n * we will check on.\n */\nbool\nis_var_notnullable(Var* var, Relids relids)\n{\n  RelOptInfo *rel = find_rel_by_relids(reldis);\n  return bms_is_member(var->varattno, rel->notnullattrs[var->varno]);\n}\n\nDo you think this is a reasonable solution?\n\n\n>\nbool\n> is_var_notnullable(Var* var, Relids relids)\n> {\n>   RelOptInfo *rel = find_rel_by_relids(reldis);\n>   return bms_is_member(var->varattno, rel->notnullattrs[var->varno]);\n> }\n>\n> Probably we can make some hackers to reduce the notnullattrs's memory usage\n> overhead.\n>\n> --\nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Wed, 7 Apr 2021 08:28:44 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "Hi:\n I'd start to work on UniqueKey again, it would be great that we can target\nit\n to PG 15. The attached patch is just for the notnull_attrs. Since we can't\nsay\n a column is nullable or not without saying in which resultset, so I think\nattaching\nit to RelOptInfo is unavoidable. Here is how my patch works.\n\n@@ -686,6 +686,12 @@ typedef struct RelOptInfo\n /* default result targetlist for Paths scanning this relation */\n struct PathTarget *reltarget; /* list of Vars/Exprs, cost, width */\n\n+ Bitmapset **notnull_attrs; /* The attno which is not null after evaluating\n+ * all the quals on this relation, for baserel,\n+ * the len would always 1. and for others the array\n+ * index is relid from relids.\n+ */\n+\n\nFor baserel, it records the notnull attrs as a bitmapset and stores it to\nRelOptInfo->notnull_attrs[0]. As for the joinrel, suppose the relids is\n{1,3,\n5}, then the notnull_attrs[1/3/5] will be used to store notnull_attrs\nBitmapset\nfor relation 1,3,5 separately. I don't handle this stuff for all kinds of\nupper
I don't handle this stuff for all kinds of\nupper\nrelation and subquery so far since UniqueKey doesn't rely on it and looks\nmore stuff should be handled there.\n\nThe patch also included some debug messages in\nset_baserel/joinrel_notnullattrs\nand attached the test.sql for easier review. Any feedback is welcome and\nhope\nthis implementation would not block UniqueKey stuff.", "msg_date": "Sat, 3 Jul 2021 22:08:26 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "On Sun, 4 Jul 2021 at 02:08, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> I'd start to work on UniqueKey again, it would be great that we can target it\n> to PG 15. The attached patch is just for the notnull_attrs. Since we can't say\n> a column is nullable or not without saying in which resultset, so I think attaching\n> it to RelOptInfo is unavoidable. Here is how my patch works.\n\nI'd also like to see this work progress for PG15. My current thoughts\nare that Tom as mentioned another way to track nullability inside Var.\nIt would be a fairly big task to do that.\n\nTom, I'm wondering if you might get a chance to draw up a design for\nwhat you've got in mind with this? I assume adding a new field in\nVar, but I'm drawing a few blanks on how things might work for equal()\nwhen one Var has the field set and another does not.\n\nDavid\n\n\n", "msg_date": "Tue, 6 Jul 2021 21:34:26 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Tom, I'm wondering if you might get a chance to draw up a design for\n> what you've got in mind with this? 
I assume adding a new field in\n> Var, but I'm drawing a few blanks on how things might work for equal()\n> when one Var has the field set and another does not.\n\nAs I said before, it hasn't progressed much past the handwaving stage,\nbut it does seem like it's time to get it done. I doubt I'll have any\ncycles for it during the commitfest, but maybe I can devote a block of\ntime during August.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 06 Jul 2021 09:14:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "On Tue, Jul 6, 2021 at 9:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > Tom, I'm wondering if you might get a chance to draw up a design for\n> > what you've got in mind with this? I assume adding a new field in\n> > Var, but I'm drawing a few blanks on how things might work for equal()\n> > when one Var has the field set and another does not.\n>\n> As I said before, it hasn't progressed much past the handwaving stage,\n> but it does seem like it's time to get it done. I doubt I'll have any\n> cycles for it during the commitfest, but maybe I can devote a block of\n> time during August.\n>\n> regards, tom lane\n>\n\nLooking forward to watching this change closely; thank you both David and\nTom!\nBut I still don't understand what faults my way has; do you mind\ntelling me the details?\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Wed, 7 Jul 2021 09:03:59 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "On Wed, 7 Jul 2021 at 13:04, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> Looking forward to watching this change closely; thank you both David and Tom!\n> But I still don't understand what faults my way has; do you mind telling me the\n> details?\n\nThe problem is that we don't need 6 different ways to determine if a\nVar can be NULL or not. You're proposing to add a method using\nBitmapsets and Tom is proposing ideas around tracking\nnullability in Vars. We don't need both.\n\nIt seems to me that having it in Var allows us to have a much finer\ngradient about where exactly a Var can be NULL.\n\nFor example: SELECT nullablecol FROM tab WHERE nullablecol = <value>;\n\nIf the equality operator is strict then the nullablecol can be NULL in\nthe WHERE clause but not in the SELECT list. 
Tom's idea should allow\nus to determine both of those things but your idea cannot tell them\napart, so, in theory at least, Tom's idea seems better to me.\n\nDavid\n\n\n", "msg_date": "Wed, 7 Jul 2021 13:20:24 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": ">\n>\n>\n> For example: SELECT nullablecol FROM tab WHERE nullablecol = <value>;\n>\n> If the equality operator is strict then the nullablecol can be NULL in\n> the WHERE clause but not in the SELECT list. Tom's idea should allow\n> us to determine both of those things but your idea cannot tell them\n> apart, so, in theory at least, Tom's idea seems better to me.\n>\n> David\n>\n\nThat's really something I can't do, thanks for the explanation.\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Wed, 7 Jul 2021 13:08:34 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "On Tue, Jul 6, 2021 at 5:34 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sun, 4 Jul 2021 at 02:08, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > I'd start to work on UniqueKey again, it would be great that we can target it\n> > to PG 15. The attached patch is just for the notnull_attrs. 
Since we can't say\n> > a column is nullable or not without saying in which result set, so I think attaching\n> > it to RelOptInfo is unavoidable. Here is how my patch works.\n>\n> I'd also like to see this work progress for PG15.\n\n\nThank you David!\n\nI am re-designing/implementing the UniqueKey, but it is better to have\na design review as soon as possible. This writing is for that. To make the\nreview easier, I also uploaded my incomplete patch (correct, runnable\nwith test cases).\n\nMain changes are:\n1. Use EC instead of expr, to cover more UniqueKey cases.\n2. Redesign the UniqueKey as below:\n\n@@ -246,6 +246,7 @@ struct PlannerInfo\n* subquery outputs */\n\nList *eq_classes; /* list of active EquivalenceClasses */\n+ List *unique_exprs; /* List of unique expr */\n\n bool ec_merging_done; /* set true once ECs are canonical */\n\n+typedef struct UniqueKey\n+{\n+ NodeTag type;\n+ Bitmapset *unique_expr_indexes;\n+ bool multi_nulls;\n+} UniqueKey;\n+\n\nPlannerInfo.unique_exprs is a List of unique exprs. A unique exprs entry is a set of\nEquivalenceClasses. For example:\n\nCREATE TABLE T1(A INT NOT NULL, B INT NOT NULL, C INT, pk INT primary key);\nCREATE UNIQUE INDEX ON t1(a, b);\n\nSELECT DISTINCT * FROM T1 WHERE a = c;\n\nThen we would have PlannerInfo.unique_exprs as below\n[\n[EC(a, c), EC(b)],\n[EC(pk)]\n]\n\nRelOptInfo(t1) would have 2 UniqueKeys.\nUniqueKey1 {unique_expr_indexes=bms{0}, multi_nulls=false}\nUniqueKey2 {unique_expr_indexes=bms{1}, multi_nulls=false}\n\nThe design will benefit many-table join cases. For example, take a 10-table\njoin where each table has a primary key (a, b). Then we would have a UniqueKey like\nthis.\n\nJoinRel{1,2,3,4} - {t1.a, t1.b, t2.a, t2.b, t3.a, t3.b, t4.a, t4.b}\nJoinRel{1,2,3,4,5} - {t1.a, t1.b, t2.a, t2.b, t3.a, t3.b, t4.a, t4.b, t5.a, t5.b}\n\nThis would be memory consuming, and building such UniqueKeys is CPU consuming as\nwell. 
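The unique_expr_indexes idea in point 2 above (a Bitmapset of indexes into one shared list, instead of copying expression sets per relation) can be sketched outside the planner like this. It is not PostgreSQL code: the Bitmapset is approximated by a single 64-bit word and all names are illustrative.

```c
/*
 * Toy model of unique_expr_indexes -- not PostgreSQL code.  Bit i stands
 * for root->unique_exprs[i]; each expression set is stored once and a
 * relation's UniqueKey is just a set of indexes into that shared list.
 */
#include <assert.h>
#include <stdint.h>

typedef uint64_t ToyUniqueKey;  /* bit i == index i in the shared list */

/* the UniqueKey of a join is the union of the joined rels' indexes */
static ToyUniqueKey
toy_join_uniquekey(ToyUniqueKey outer, ToyUniqueKey inner)
{
    return outer | inner;       /* still one bitmapword, as noted above */
}

/* true when every index in a also appears in b (a subset test) */
static int
toy_uniquekey_is_subset(ToyUniqueKey a, ToyUniqueKey b)
{
    return (a & ~b) == 0;
}
```

With indexes 0..9 standing for the ten (a, b) primary keys, the key for JoinRel{1,2,3,4} stays a single word no matter how many tables are joined.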
With the new design, we can store it as\n\nPlannerInfo.unique_exprs =\n[\n[t1.a, t1.b], -- EC is ignored in this document.\n[t2.a, t2.b],\n[t3.a, t3.b],\n[t4.a, t4.b],\n[t5.a, t5.b],\n[t6.a, t6.b],\n[t7.a, t7.b],\n[t8.a, t8.b],\n[t9.a, t9.b],\n[t10.a, t10.b],\n]\n\nJoinRel{1,2,3,4} - Bitmapset{0,1,2,3} -- one bitmapword.\nJoinRel{1,2,3,4,5} - Bitmapset{0,1,2,3,4} -- one bitmapword.\n\n3. Define a new SingleRow node and use it in joinrels as well.\n\n+typedef struct SingleRow\n+{\n+ NodeTag type;\n+ Index relid;\n+} SingleRow;\n\nSELECT * FROM t1, t2 WHERE t2.pk = 1;\n\nPlannerInfo.unique_exprs\n[\n[t1.a, t1.b],\nSingleRow{relid=2}\n]\n\nJoinRel{t1} - Bitmapset{0}\nJoinRel{t2} - Bitmapset{1}\nJoinRel{1, 2} - Bitmapset{0, 1} -- SingleRow will never be expanded to dedicated\nexprs.\n\n4. Cut the useless UniqueKeys totally at the baserel stage based on\n root->distinct_pathkey. If we want to use it anywhere else, I think this\n design is OK as well, for example: group by UniqueKey.\n\n5. Implemented the relation_is_distinct_for(root, rel, distinct_pathkey)\n effectively. Here I used distinct_pathkey rather than\n Query->distinctClause.\n\nSince the ECs in PlannerInfo.unique_exprs point to\nPathKey.pk_eqclass, we can compare the address directly with '=', rather than\nequal(a, b) (and since equal would check the address as well, even if I use equal\nthe performance is good). SingleRow is handled for this case as well.\n\nYou can check more details in the attached patch. Any feedback is welcome.\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Wed, 7 Jul 2021 17:00:05 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "> 4. Cut the useless UniqueKey totally on the baserel stage based on\n> root->distinct_pathkey. If we want to use it anywhere else, I think this\n> design is OK as well. 
for example: group by UniqueKey.\n>\n\nThe intention of this is that I want to cut off useless UniqueKeys ASAP. In the\nprevious patch, I said "if the unique_exprs does not exist in root->distinct_paths,\nthen it is useless". However, this only works for a single rel. As for the\njoinrel, we have to maintain the UniqueKey on mergeable join clauses for cases\nlike below.\n\nSELECT DISTINCT t1.pk FROM t1, t2 WHERE t1.a = t2.pk;\nor\nSELECT DISTINCT t1.pk FROM t1 left join t2 on t1.a = t2.pk;\n\nIn this case, t2.pk isn't shown in distinct_pathkey, but it is still useful at\nthe join stage and not useful after joining.\n\nSo how can we maintain a UniqueKey like t2.pk?\n1). If t2.pk exists in root->eq_classes, keep it.\n2). If t2.pk doesn't exist in RelOptInfo->reltarget after joining, discard it.\n\nStep 1 is not so bad since we have RelOptInfo.eclass_indexes. However step 2\nlooks pretty boring since we have to check on every RelOptInfo and we may have\nlots of RelOptInfos.\n\nAny suggestions on this?\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)\n\n\n", "msg_date": "Tue, 13 Jul 2021 17:55:21 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "On Tue, Jul 13, 2021 at 5:55 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> > 4. Cut the useless UniqueKey totally on the baserel stage based on\n> > root->distinct_pathkey. If we want to use it anywhere else, I think this\n> > design is OK as well. for example: group by UniqueKey.\n> >\n>\n> The intention of this is that I want to cut off useless UniqueKeys ASAP. In the\n> previous patch, I said "if the unique_exprs does not exist in root->distinct_paths,\n> then it is useless". However, this only works for a single rel. 
As for the\n> joinrel, we have to maintain the UniqueKey on mergeable join clause for the case\n> like below.\n>\n> SELECT DISTINCT t1.pk FROM t1, t2 WHERE t1.a = t2.pk;\n> or\n> SELECT DISTINCT t1.pk FROM t1 left join t2 on t1.a = t2.pk;\n>\n> In this case, t2.pk isn't shown in distinct_pathkey, but it is still useful at\n> the join stage and not useful after joining.\n>\n> So how can we maintain the UniqueKey like t2.pk?\n> 1). If t2.pk exists in root->eq_classes, keep it.\n> 2). If t2.pk doesn't exist in RelOptInfo->reltarget after joining, discard it.\n>\n> Step 1 is not so bad since we have RelOptInfo.eclass_indexes. However step 2\n> looks pretty boring since we have to check on every RelOptInfo and we may have\n> lots of RelOptInfo.\n>\n> Any suggestions on this?\n>\n\nJust a function like truncate_useless_pathkey would be OK. For that we need\nto handle uniquekey_useful_for_merging and uniquekey_useful_for_distinct.\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)\n\n\n", "msg_date": "Thu, 15 Jul 2021 17:33:53 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "Hi:\n\nI have finished the parts for baserel, joinrel, subquery, distinctrel. I think\nthe hardest ones have been verified. Here is the design overview.\n\n1. Use EC instead of expr to cover more UniqueKey cases.\n2. Redesign the UniqueKey as below:\n\n@@ -246,6 +246,7 @@ struct PlannerInfo\n\nList *eq_classes; /* list of active EquivalenceClasses */\n+ List *unique_exprs; /* List of unique expr */\n\n bool ec_merging_done; /* set true once ECs are canonical */\n\n+typedef struct UniqueKey\n+{\n+ NodeTag type;\n+ Bitmapset *unique_expr_indexes;\n+ bool multi_nulls;\n+} UniqueKey;\n+\n\nPlannerInfo.unique_exprs is a List of unique exprs. Unique Exprs is a set of\nEquivalenceClass. 
for example:\n\nCREATE TABLE T1(A INT NOT NULL, B INT NOT NULL, C INT, pk INT primary key);\nCREATE UNIQUE INDEX ON t1(a, b);\n\nSELECT DISTINCT * FROM T1 WHERE a = c;\n\nThen we would have PlannerInfo.unique_exprs as below\n[\n[EC(a, c), EC(b)],\n[EC(pk)]\n]\n\nRelOptInfo(t1) would have 2 UniqueKeys.\nUniqueKey1 {unique_expr_indexes=bms{0}, multinull=false]\nUniqueKey2 {unique_expr_indexes=bms{1}, multinull=false]\n\nThe design will benefit many table joins cases. For instance a 10- tables join,\neach table has a primary key (a, b). Then we would have a UniqueKey like\nthis.\n\nJoinRel{1,2,3,4} - {t1.a, t1.b, t2.a, t2.b, t3.a, t3.b t4.a t4.b}\nJoinRel{1,2,3,4,5} - {t1.a, t1.b, t2.a, t2.b, t3.a, t3.b t4.a t4.b t5.a t5.b}\n\nWhen more tables are joined, the list would be longer and longer, build the list\nconsumes both CPU cycles and memory.\n\nWith the method as above, we can store it as:\n\nroot->unique_exprs = /* All the UniqueKey is stored once */\n[\n[t1.a, t1.b], -- EC is ignored in document.\n[t2.a, t2.b],\n[t3.a, t3.b],\n[t4.a, t4.b],\n[t5.a, t5.b],\n[t6.a, t6.b],\n[t7.a, t7.b],\n[t8.a, t8.b],\n[t9.a, t9.b],\n[t10.a, t10.b],\n]\n\nJoinRel{1,2,3,4} - Bitmapset{0,1,2,3} -- one bitmapword.\nJoinRel{1,2,3,4,5} - Bitmapset{0,1,2,3,4} -- one bitmapword.\n\nThe member in the bitmap is the index of root->unique_exprs rather than\nroot->eq_classes because we need to store the SingleRow case in\nroot->unqiue_exprs as well.\n\n3. Define a new SingleRow node and use it in joinrel as well.\n\n+typedef struct SingleRow\n+{\n+ NodeTag type;\n+ Index relid;\n+} SingleRow;\n\nSELECT * FROM t1, t2 WHERE t2.pk = 1;\n\nroot->unique_exprs\n[\n[t1.a, t1.b],\nSingleRow{relid=2}\n]\n\nJoinRel{t1} - Bitmapset{0}\nJoinRel{t2} - Bitmapset{1}\n\nJoinRelt{1, 2} Bitmapset{0, 1} -- SingleRow will never be expanded to dedicated\nexprs.\n\n4. 
Interesting UniqueKey to remove the Useless UniqueKey as soon as possible.\n\nThe overall idea is similar with PathKey, I distinguish the UniqueKey for\ndistinct purpose and useful_for_merging purpose.\n\nSELECT DISTINCT pk FROM t; -- OK, maintain it all the time, just like\nroot->query_pathkey.\n\nSELECT DISTINCT t2.c FROM t1, t2 WHERE t1.d = t2.pk; -- T2's UniqueKey PK is\nuse before t1 join t2, but not useful after it.\n\nCurrently the known issue I didn't pass the \"interesting UniqueKey\" info to\nsubquery well [2], I'd like to talk more about this when we discuss about\nUnqiueKey on subquery part.\n\n\n5. relation_is_distinct_for\n\nNow I design the function as\n\n+ bool\n+ relation_is_distinct_for(PlannerInfo *root, RelOptInfo *rel, List\n *distinct_pathkey)\n\nIt is \"List *distinct_pathkey\", rather than \"List *exprs\". The reason pathkey\nhas EC in it, and all the UniqueKey has EC as well. if so, the subset-like\nchecking is very effective. As for the distinct/group as no-op case, we have\npathkey all the time. The only drawback of it is some clauses are not-sortable,\nin this case, the root->distinct_pathkey and root->group_pathkey is not\nset. However it should be rare by practice, so I ignore this part for\nnow. After all, I can have a relation_is_disticnt_for version for Exprs. I just\nnot implemented it so far.\n\n6. EC overhead in UnqiueKey & UNION case.\n\nUntil now I didn't create any new EC for the UniqueKey case, I just used the\nexisting ones. However I can't handle the case like\n\nSELECT a, b FROM t1\nUNION\nSELECT x, y FROM t2;\n\nIn this case, there is no EC created with existing code. and I don't want to\ncreate them for the UniqueKey case as well. so my plan is just not to handle\nthe case for UNION.\n\nSince we need some big effort from the reviewer, I split the patch into many\nsmaller chunks.\n\nPatch 1 / Patch 2: I just split some code which UniqueKey uses but not very\ninterrelated. 
Splitting them out to reduce the core patch size.\nPatch 3: still the notnull stuff. This one doesn't play a big role overall,\neven if the design is changed at last, we can just modify some small stuff. IMO,\nI don't think it is a blocker issue to move on.\nPatch 4: Support the UniqueKey for baserel.\nPatch 5: Support the UniqueKey for joinrel.\nPatch 6: Support the UniqueKey for some upper relation, like distinctrel,\ngroupby rel.\n\nI'd suggest moving on like this:\n1. Have an overall review to see if any outstanding issues the patch has.\n2. If not, we can review and commit patch 1 & patch 2 to reduce the patch size.\n3. Decide which method to take for not null stuff. IMO Tom's method\nwould be a bit\n complex and the benefit is not very clear to me[1]. So the choices\n include: a). move on UniqueKey stuff until Tom's method is ready. b). Move\n on the UniqueKey with my notnull way, and changes to Tom's method when\n necessary. I prefer method b).\n4. Review & Commit the UniqueKey for BaseRel part.\n5. Review & Commit the UniqueKey for JoinRel part.\n6. Review & Commit the UniqueKey for SubQuery part *without* the Interesting\n UniqueKey well handled.\n7. Review & Commit the UniqueKey for SubQuery part *with* the Interesting\n UniqueKey well handled.\n8. Discuss about the UniqueKey on partitioned rel case and develop/review/commit\n it.\n9. 
Apply UniqueKey stuff on more user cases rather than DISTINCT.\n\nWhat do you think about this?\n\n[1] https://www.postgresql.org/message-id/CAApHDvo5b2pYX%2BdbPy%2BysGBSarezRSfXthX32TZNFm0wPdfKGQ%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAKU4AWo6-%3D9mg3UQ5UJhGCMw6wyTPyPGgV5oh6dFvwEN%3D%2Bhb_w%40mail.gmail.com\n\n\nThanks", "msg_date": "Sun, 15 Aug 2021 22:33:13 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "On Sun, Aug 15, 2021 at 7:33 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> Hi:\n>\n> I have finished the parts for baserel, joinrel, subquery, distinctrel. I\n> think\n> the hardest ones have been verified. Here is the design overview.\n>\n> 1. Use EC instead of expr to cover more UniqueKey cases.\n> 2. Redesign the UniqueKey as below:\n>\n> @@ -246,6 +246,7 @@ struct PlannerInfo\n>\n> List *eq_classes; /* list of active EquivalenceClasses */\n> + List *unique_exprs; /* List of unique expr */\n>\n> bool ec_merging_done; /* set true once ECs are canonical */\n>\n> +typedef struct UniqueKey\n> +{\n> + NodeTag type;\n> + Bitmapset *unique_expr_indexes;\n> + bool multi_nulls;\n> +} UniqueKey;\n> +\n>\n> PlannerInfo.unique_exprs is a List of unique exprs. Unique Exprs is a set\n> of\n> EquivalenceClass. for example:\n>\n> CREATE TABLE T1(A INT NOT NULL, B INT NOT NULL, C INT, pk INT primary\n> key);\n> CREATE UNIQUE INDEX ON t1(a, b);\n>\n> SELECT DISTINCT * FROM T1 WHERE a = c;\n>\n> Then we would have PlannerInfo.unique_exprs as below\n> [\n> [EC(a, c), EC(b)],\n> [EC(pk)]\n> ]\n>\n> RelOptInfo(t1) would have 2 UniqueKeys.\n> UniqueKey1 {unique_expr_indexes=bms{0}, multinull=false]\n> UniqueKey2 {unique_expr_indexes=bms{1}, multinull=false]\n>\n> The design will benefit many table joins cases. For instance a 10- tables\n> join,\n> each table has a primary key (a, b). 
Then we would have a UniqueKey like\n> this.\n>\n> JoinRel{1,2,3,4} - {t1.a, t1.b, t2.a, t2.b, t3.a, t3.b t4.a t4.b}\n> JoinRel{1,2,3,4,5} - {t1.a, t1.b, t2.a, t2.b, t3.a, t3.b t4.a t4.b t5.a\n> t5.b}\n>\n> When more tables are joined, the list would be longer and longer, build\n> the list\n> consumes both CPU cycles and memory.\n>\n> With the method as above, we can store it as:\n>\n> root->unique_exprs = /* All the UniqueKey is stored once */\n> [\n> [t1.a, t1.b], -- EC is ignored in document.\n> [t2.a, t2.b],\n> [t3.a, t3.b],\n> [t4.a, t4.b],\n> [t5.a, t5.b],\n> [t6.a, t6.b],\n> [t7.a, t7.b],\n> [t8.a, t8.b],\n> [t9.a, t9.b],\n> [t10.a, t10.b],\n> ]\n>\n> JoinRel{1,2,3,4} - Bitmapset{0,1,2,3} -- one bitmapword.\n> JoinRel{1,2,3,4,5} - Bitmapset{0,1,2,3,4} -- one bitmapword.\n>\n> The member in the bitmap is the index of root->unique_exprs rather than\n> root->eq_classes because we need to store the SingleRow case in\n> root->unqiue_exprs as well.\n>\n> 3. Define a new SingleRow node and use it in joinrel as well.\n>\n> +typedef struct SingleRow\n> +{\n> + NodeTag type;\n> + Index relid;\n> +} SingleRow;\n>\n> SELECT * FROM t1, t2 WHERE t2.pk = 1;\n>\n> root->unique_exprs\n> [\n> [t1.a, t1.b],\n> SingleRow{relid=2}\n> ]\n>\n> JoinRel{t1} - Bitmapset{0}\n> JoinRel{t2} - Bitmapset{1}\n>\n> JoinRelt{1, 2} Bitmapset{0, 1} -- SingleRow will never be expanded to\n> dedicated\n> exprs.\n>\n> 4. 
Interesting UniqueKey to remove the Useless UniqueKey as soon as\n> possible.\n>\n> The overall idea is similar with PathKey, I distinguish the UniqueKey for\n> distinct purpose and useful_for_merging purpose.\n>\n> SELECT DISTINCT pk FROM t; -- OK, maintain it all the time, just like\n> root->query_pathkey.\n>\n> SELECT DISTINCT t2.c FROM t1, t2 WHERE t1.d = t2.pk; -- T2's UniqueKey PK\n> is\n> use before t1 join t2, but not useful after it.\n>\n> Currently the known issue I didn't pass the \"interesting UniqueKey\" info to\n> subquery well [2], I'd like to talk more about this when we discuss about\n> UnqiueKey on subquery part.\n>\n>\n> 5. relation_is_distinct_for\n>\n> Now I design the function as\n>\n> + bool\n> + relation_is_distinct_for(PlannerInfo *root, RelOptInfo *rel, List\n> *distinct_pathkey)\n>\n> It is \"List *distinct_pathkey\", rather than \"List *exprs\". The reason\n> pathkey\n> has EC in it, and all the UniqueKey has EC as well. if so, the subset-like\n> checking is very effective. As for the distinct/group as no-op case, we\n> have\n> pathkey all the time. The only drawback of it is some clauses are\n> not-sortable,\n> in this case, the root->distinct_pathkey and root->group_pathkey is not\n> set. However it should be rare by practice, so I ignore this part for\n> now. After all, I can have a relation_is_disticnt_for version for Exprs. I\n> just\n> not implemented it so far.\n>\n> 6. EC overhead in UnqiueKey & UNION case.\n>\n> Until now I didn't create any new EC for the UniqueKey case, I just used\n> the\n> existing ones. However I can't handle the case like\n>\n> SELECT a, b FROM t1\n> UNION\n> SELECT x, y FROM t2;\n>\n> In this case, there is no EC created with existing code. and I don't want\n> to\n> create them for the UniqueKey case as well. 
so my plan is just not to\n> handle\n> the case for UNION.\n>\n> Since we need some big effort from the reviewer, I split the patch into\n> many\n> smaller chunks.\n>\n> Patch 1 / Patch 2: I just split some code which UniqueKey uses but not\n> very\n> interrelated. Splitting them out to reduce the core patch size.\n> Patch 3: still the notnull stuff. This one doesn't play a big role\n> overall,\n> even if the design is changed at last, we can just modify some small\n> stuff. IMO,\n> I don't think it is a blocker issue to move on.\n> Patch 4: Support the UniqueKey for baserel.\n> Patch 5: Support the UniqueKey for joinrel.\n> Patch 6: Support the UniqueKey for some upper relation, like distinctrel,\n> groupby rel.\n>\n> I'd suggest moving on like this:\n> 1. Have an overall review to see if any outstanding issues the patch has.\n> 2. If not, we can review and commit patch 1 & patch 2 to reduce the patch\n> size.\n> 3. Decide which method to take for not null stuff. IMO Tom's method\n> would be a bit\n> complex and the benefit is not very clear to me[1]. So the choices\n> include: a). move on UniqueKey stuff until Tom's method is ready. b).\n> Move\n> on the UniqueKey with my notnull way, and changes to Tom's method when\n> necessary. I prefer method b).\n> 4. Review & Commit the UniqueKey for BaseRel part.\n> 5. Review & Commit the UniqueKey for JoinRel part.\n> 6. Review & Commit the UniqueKey for SubQuery part *without* the\n> Interesting\n> UniqueKey well handled.\n> 7. Review & Commit the UniqueKey for SubQuery part *with* the Interesting\n> UniqueKey well handled.\n> 8. Discuss about the UniqueKey on partitioned rel case and\n> develop/review/commit\n> it.\n> 9. 
Apply UniqueKey stuff on more user cases rather than DISTINCT.\n>\n> What do you think about this?\n>\n> [1]\n> https://www.postgresql.org/message-id/CAApHDvo5b2pYX%2BdbPy%2BysGBSarezRSfXthX32TZNFm0wPdfKGQ%40mail.gmail.com\n> [2]\n> https://www.postgresql.org/message-id/CAKU4AWo6-%3D9mg3UQ5UJhGCMw6wyTPyPGgV5oh6dFvwEN%3D%2Bhb_w%40mail.gmail.com\n>\n>\n> Thanks\n>\nHi,\nFor v3-0005-Support-UniqueKey-on-JoinRel.patch :\n\n+static void populate_joinrel_composited_uniquekey(PlannerInfo *root,\n\npopulate_joinrel_composited_uniquekey\n-> populate_joinrel_composite_uniquekey (without the trailing d for\ncomposite)\n\nFor populate_joinrel_uniquekeys():\n\n+ foreach(lc, outerrel->uniquekeys)\n+ {\n...\n+ return;\n\nShould the remaining unique keys be considered ?\n\nFor populate_joinrel_uniquekey_for_rel():\n\n+ else if (bms_equal(r->right_relids, rel->relids) && r->left_ec !=\nNULL)\n+ {\n+ other_ecs = lappend(other_ecs, r->right_ec);\n+ other_relids = bms_add_members(other_relids, r->left_relids);\n\nIt seems the append to other_ecs is the same as the one for the\n`bms_equal(r->left_relids, rel->relids) && r->right_ec != NULL` case. Is\nthis correct ?\n\nCheers\n\nOn Sun, Aug 15, 2021 at 7:33 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:Hi:\n\nI have finished the parts for baserel, joinrel, subquery, distinctrel. I think\nthe hardest ones have been verified.  Here is the design overview.\n\n1. Use EC instead of expr to cover more UniqueKey cases.\n2. Redesign the UniqueKey as below:\n\n@@ -246,6 +246,7 @@ struct PlannerInfo\n\nList   *eq_classes; /* list of active EquivalenceClasses */\n+ List   *unique_exprs; /* List of unique expr */\n\n  bool ec_merging_done; /* set true once ECs are canonical */\n\n+typedef struct UniqueKey\n+{\n+ NodeTag type;\n+ Bitmapset *unique_expr_indexes;\n+ bool multi_nulls;\n+} UniqueKey;\n+\n\nPlannerInfo.unique_exprs is a List of unique exprs.  Unique Exprs is a set of\nEquivalenceClass. 
for example:\n\nCREATE TABLE T1(A INT NOT NULL, B INT NOT NULL, C INT,  pk INT primary key);\nCREATE UNIQUE INDEX ON t1(a, b);\n\nSELECT DISTINCT * FROM T1 WHERE a = c;\n\nThen we would have PlannerInfo.unique_exprs as below\n[\n[EC(a, c), EC(b)],\n[EC(pk)]\n]\n\nRelOptInfo(t1) would have 2 UniqueKeys.\nUniqueKey1 {unique_expr_indexes=bms{0}, multinull=false]\nUniqueKey2 {unique_expr_indexes=bms{1}, multinull=false]\n\nThe design will benefit many table joins cases. For instance a 10- tables join,\neach table has a primary key (a, b).  Then we would have a UniqueKey like\nthis.\n\nJoinRel{1,2,3,4} -  {t1.a, t1.b, t2.a, t2.b, t3.a, t3.b t4.a t4.b}\nJoinRel{1,2,3,4,5} -  {t1.a, t1.b, t2.a, t2.b, t3.a, t3.b t4.a t4.b t5.a t5.b}\n\nWhen more tables are joined, the list would be longer and longer, build the list\nconsumes both CPU cycles and memory.\n\nWith the method as above, we can store it as:\n\nroot->unique_exprs =  /* All the UniqueKey is stored once */\n[\n[t1.a, t1.b], -- EC is ignored in document.\n[t2.a, t2.b],\n[t3.a, t3.b],\n[t4.a, t4.b],\n[t5.a, t5.b],\n[t6.a, t6.b],\n[t7.a, t7.b],\n[t8.a, t8.b],\n[t9.a, t9.b],\n[t10.a, t10.b],\n]\n\nJoinRel{1,2,3,4} -  Bitmapset{0,1,2,3} -- one bitmapword.\nJoinRel{1,2,3,4,5} -  Bitmapset{0,1,2,3,4} -- one bitmapword.\n\nThe member in the bitmap is the index of root->unique_exprs rather than\nroot->eq_classes because we need to store the SingleRow case in\nroot->unqiue_exprs as well.\n\n3. Define a new SingleRow node and use it in joinrel as well.\n\n+typedef struct SingleRow\n+{\n+ NodeTag type;\n+ Index relid;\n+} SingleRow;\n\nSELECT * FROM t1, t2 WHERE t2.pk = 1;\n\nroot->unique_exprs\n[\n[t1.a, t1.b],\nSingleRow{relid=2}\n]\n\nJoinRel{t1} - Bitmapset{0}\nJoinRel{t2} - Bitmapset{1}\n\nJoinRelt{1, 2} Bitmapset{0, 1} -- SingleRow will never be expanded to dedicated\nexprs.\n\n4. 
Interesting UniqueKey to remove the Useless UniqueKey as soon as possible.\n\nThe overall idea is similar with PathKey, I distinguish the UniqueKey for\ndistinct purpose and useful_for_merging purpose.\n\nSELECT DISTINCT pk FROM  t; -- OK, maintain it all the time, just like\nroot->query_pathkey.\n\nSELECT DISTINCT t2.c FROM t1, t2 WHERE t1.d = t2.pk; -- T2's UniqueKey PK is\nuse before t1 join t2, but not useful after it.\n\nCurrently the known issue I didn't pass the \"interesting UniqueKey\" info to\nsubquery well [2], I'd like to talk more about this when we discuss about\nUnqiueKey on subquery part.\n\n\n5. relation_is_distinct_for\n\nNow I design the function as\n\n+ bool\n+ relation_is_distinct_for(PlannerInfo *root, RelOptInfo *rel, List\n  *distinct_pathkey)\n\nIt is \"List *distinct_pathkey\", rather than \"List *exprs\". The reason pathkey\nhas EC in it, and all the UniqueKey has EC as well. if so, the subset-like\nchecking is very effective.  As for the distinct/group as no-op case, we have\npathkey all the time. The only drawback of it is some clauses are not-sortable,\nin this case, the root->distinct_pathkey and root->group_pathkey is not\nset. However it should be rare by practice, so I ignore this part for\nnow. After all, I can have a relation_is_disticnt_for version for Exprs. I just\nnot implemented it so far.\n\n6. EC overhead in UnqiueKey & UNION case.\n\nUntil now I didn't create any new EC for the UniqueKey case, I just used the\nexisting ones. However I can't handle the case like\n\nSELECT a, b FROM t1\nUNION\nSELECT x, y FROM t2;\n\nIn this case, there is no EC created with existing code. and I don't want to\ncreate them for the UniqueKey case as well.  so my plan is just not to handle\nthe case for UNION.\n\nSince we need some big effort from the reviewer, I split the patch into many\nsmaller chunks.\n\nPatch 1 / Patch 2:  I just split some code which UniqueKey uses but not very\ninterrelated. 
Splitting them out to reduce the core patch size.\nPatch 3:  still the notnull stuff.  This one doesn't play a big role overall,\neven if the design is changed at last, we can just modify some small stuff. IMO,\nI don't think it is a blocker issue to move on.\nPatch 4:  Support the UniqueKey for baserel.\nPatch 5: Support the UniqueKey for joinrel.\nPatch 6: Support the UniqueKey for some upper relation, like distinctrel,\ngroupby rel.\n\nI'd suggest moving on like this:\n1. Have an overall review to see if any outstanding issues the patch has.\n2. If not, we can review and commit patch 1 & patch 2 to reduce the patch size.\n3. Decide which method to take for not null stuff. IMO Tom's method\nwould be a bit\n   complex and the benefit is not very clear to me[1].  So the choices\n   include: a). move on UniqueKey stuff until Tom's method is ready. b). Move\n   on the UniqueKey with my notnull way, and changes to Tom's method when\n   necessary. I prefer method b).\n4. Review & Commit the UniqueKey for BaseRel part.\n5. Review & Commit the UniqueKey for JoinRel part.\n6. Review & Commit the UniqueKey for SubQuery part *without* the Interesting\n   UniqueKey well handled.\n7. Review & Commit the UniqueKey for SubQuery part *with* the Interesting\n   UniqueKey well handled.\n8. Discuss about the UniqueKey on partitioned rel case and develop/review/commit\n   it.\n9. 
Apply UniqueKey stuff on more user cases rather than DISTINCT.\n\nWhat do you think about this?\n\n[1] https://www.postgresql.org/message-id/CAApHDvo5b2pYX%2BdbPy%2BysGBSarezRSfXthX32TZNFm0wPdfKGQ%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAKU4AWo6-%3D9mg3UQ5UJhGCMw6wyTPyPGgV5oh6dFvwEN%3D%2Bhb_w%40mail.gmail.com\n\n\nThanks\n\nHi,\nFor v3-0005-Support-UniqueKey-on-JoinRel.patch :\n\n+static void populate_joinrel_composited_uniquekey(PlannerInfo *root,\n\npopulate_joinrel_composited_uniquekey -> populate_joinrel_composite_uniquekey (without the trailing d for composite)\n\nFor populate_joinrel_uniquekeys():\n\n+       foreach(lc, outerrel->uniquekeys)\n+       {\n...\n+           return;\n\nShould the remaining unique keys be considered ?\n\nFor populate_joinrel_uniquekey_for_rel():\n\n+       else if (bms_equal(r->right_relids, rel->relids) && r->left_ec != NULL)\n+       {\n+           other_ecs = lappend(other_ecs, r->right_ec);\n+           other_relids = bms_add_members(other_relids, r->left_relids);\n\nIt seems the append to other_ecs is the same as the one for the `bms_equal(r->left_relids, rel->relids) && r->right_ec != NULL` case. Is this correct ?\n\nCheers", "msg_date": "Sun, 15 Aug 2021 09:41:17 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "Hi Zhihong,\n\nOn Mon, Aug 16, 2021 at 12:35 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n>\n>\n> On Sun, Aug 15, 2021 at 7:33 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>>\n>> Hi:\n>>\n>> I have finished the parts for baserel, joinrel, subquery, distinctrel. I think\n>> the hardest ones have been verified. Here is the design overview.\n>>\n>> 1. Use EC instead of expr to cover more UniqueKey cases.\n>> 2. 
Redesign the UniqueKey as below:\n>>\n>> @@ -246,6 +246,7 @@ struct PlannerInfo\n>>\n>> List *eq_classes; /* list of active EquivalenceClasses */\n>> + List *unique_exprs; /* List of unique expr */\n>>\n>> bool ec_merging_done; /* set true once ECs are canonical */\n>>\n>> +typedef struct UniqueKey\n>> +{\n>> + NodeTag type;\n>> + Bitmapset *unique_expr_indexes;\n>> + bool multi_nulls;\n>> +} UniqueKey;\n>> +\n>>\n>> PlannerInfo.unique_exprs is a List of unique exprs. Unique Exprs is a set of\n>> EquivalenceClass. for example:\n>>\n>> CREATE TABLE T1(A INT NOT NULL, B INT NOT NULL, C INT, pk INT primary key);\n>> CREATE UNIQUE INDEX ON t1(a, b);\n>>\n>> SELECT DISTINCT * FROM T1 WHERE a = c;\n>>\n>> Then we would have PlannerInfo.unique_exprs as below\n>> [\n>> [EC(a, c), EC(b)],\n>> [EC(pk)]\n>> ]\n>>\n>> RelOptInfo(t1) would have 2 UniqueKeys.\n>> UniqueKey1 {unique_expr_indexes=bms{0}, multinull=false]\n>> UniqueKey2 {unique_expr_indexes=bms{1}, multinull=false]\n>>\n>> The design will benefit many table joins cases. For instance a 10- tables join,\n>> each table has a primary key (a, b). 
Then we would have a UniqueKey like\n>> this.\n>>\n>> JoinRel{1,2,3,4} - {t1.a, t1.b, t2.a, t2.b, t3.a, t3.b t4.a t4.b}\n>> JoinRel{1,2,3,4,5} - {t1.a, t1.b, t2.a, t2.b, t3.a, t3.b t4.a t4.b t5.a t5.b}\n>>\n>> When more tables are joined, the list would be longer and longer, build the list\n>> consumes both CPU cycles and memory.\n>>\n>> With the method as above, we can store it as:\n>>\n>> root->unique_exprs = /* All the UniqueKey is stored once */\n>> [\n>> [t1.a, t1.b], -- EC is ignored in document.\n>> [t2.a, t2.b],\n>> [t3.a, t3.b],\n>> [t4.a, t4.b],\n>> [t5.a, t5.b],\n>> [t6.a, t6.b],\n>> [t7.a, t7.b],\n>> [t8.a, t8.b],\n>> [t9.a, t9.b],\n>> [t10.a, t10.b],\n>> ]\n>>\n>> JoinRel{1,2,3,4} - Bitmapset{0,1,2,3} -- one bitmapword.\n>> JoinRel{1,2,3,4,5} - Bitmapset{0,1,2,3,4} -- one bitmapword.\n>>\n>> The member in the bitmap is the index of root->unique_exprs rather than\n>> root->eq_classes because we need to store the SingleRow case in\n>> root->unqiue_exprs as well.\n>>\n>> 3. Define a new SingleRow node and use it in joinrel as well.\n>>\n>> +typedef struct SingleRow\n>> +{\n>> + NodeTag type;\n>> + Index relid;\n>> +} SingleRow;\n>>\n>> SELECT * FROM t1, t2 WHERE t2.pk = 1;\n>>\n>> root->unique_exprs\n>> [\n>> [t1.a, t1.b],\n>> SingleRow{relid=2}\n>> ]\n>>\n>> JoinRel{t1} - Bitmapset{0}\n>> JoinRel{t2} - Bitmapset{1}\n>>\n>> JoinRelt{1, 2} Bitmapset{0, 1} -- SingleRow will never be expanded to dedicated\n>> exprs.\n>>\n>> 4. 
Interesting UniqueKey to remove the Useless UniqueKey as soon as possible.\n>>\n>> The overall idea is similar with PathKey, I distinguish the UniqueKey for\n>> distinct purpose and useful_for_merging purpose.\n>>\n>> SELECT DISTINCT pk FROM t; -- OK, maintain it all the time, just like\n>> root->query_pathkey.\n>>\n>> SELECT DISTINCT t2.c FROM t1, t2 WHERE t1.d = t2.pk; -- T2's UniqueKey PK is\n>> use before t1 join t2, but not useful after it.\n>>\n>> Currently the known issue I didn't pass the \"interesting UniqueKey\" info to\n>> subquery well [2], I'd like to talk more about this when we discuss about\n>> UnqiueKey on subquery part.\n>>\n>>\n>> 5. relation_is_distinct_for\n>>\n>> Now I design the function as\n>>\n>> + bool\n>> + relation_is_distinct_for(PlannerInfo *root, RelOptInfo *rel, List\n>> *distinct_pathkey)\n>>\n>> It is \"List *distinct_pathkey\", rather than \"List *exprs\". The reason pathkey\n>> has EC in it, and all the UniqueKey has EC as well. if so, the subset-like\n>> checking is very effective. As for the distinct/group as no-op case, we have\n>> pathkey all the time. The only drawback of it is some clauses are not-sortable,\n>> in this case, the root->distinct_pathkey and root->group_pathkey is not\n>> set. However it should be rare by practice, so I ignore this part for\n>> now. After all, I can have a relation_is_disticnt_for version for Exprs. I just\n>> not implemented it so far.\n>>\n>> 6. EC overhead in UnqiueKey & UNION case.\n>>\n>> Until now I didn't create any new EC for the UniqueKey case, I just used the\n>> existing ones. However I can't handle the case like\n>>\n>> SELECT a, b FROM t1\n>> UNION\n>> SELECT x, y FROM t2;\n>>\n>> In this case, there is no EC created with existing code. and I don't want to\n>> create them for the UniqueKey case as well. 
so my plan is just not to handle\n>> the case for UNION.\n>>\n>> Since we need some big effort from the reviewer, I split the patch into many\n>> smaller chunks.\n>>\n>> Patch 1 / Patch 2: I just split some code which UniqueKey uses but not very\n>> interrelated. Splitting them out to reduce the core patch size.\n>> Patch 3: still the notnull stuff. This one doesn't play a big role overall,\n>> even if the design is changed at last, we can just modify some small stuff. IMO,\n>> I don't think it is a blocker issue to move on.\n>> Patch 4: Support the UniqueKey for baserel.\n>> Patch 5: Support the UniqueKey for joinrel.\n>> Patch 6: Support the UniqueKey for some upper relation, like distinctrel,\n>> groupby rel.\n>>\n>> I'd suggest moving on like this:\n>> 1. Have an overall review to see if any outstanding issues the patch has.\n>> 2. If not, we can review and commit patch 1 & patch 2 to reduce the patch size.\n>> 3. Decide which method to take for not null stuff. IMO Tom's method\n>> would be a bit\n>> complex and the benefit is not very clear to me[1]. So the choices\n>> include: a). move on UniqueKey stuff until Tom's method is ready. b). Move\n>> on the UniqueKey with my notnull way, and changes to Tom's method when\n>> necessary. I prefer method b).\n>> 4. Review & Commit the UniqueKey for BaseRel part.\n>> 5. Review & Commit the UniqueKey for JoinRel part.\n>> 6. Review & Commit the UniqueKey for SubQuery part *without* the Interesting\n>> UniqueKey well handled.\n>> 7. Review & Commit the UniqueKey for SubQuery part *with* the Interesting\n>> UniqueKey well handled.\n>> 8. Discuss about the UniqueKey on partitioned rel case and develop/review/commit\n>> it.\n>> 9. 
Apply UniqueKey stuff on more user cases rather than DISTINCT.\n>>\n>> What do you think about this?\n>>\n>> [1] https://www.postgresql.org/message-id/CAApHDvo5b2pYX%2BdbPy%2BysGBSarezRSfXthX32TZNFm0wPdfKGQ%40mail.gmail.com\n>> [2] https://www.postgresql.org/message-id/CAKU4AWo6-%3D9mg3UQ5UJhGCMw6wyTPyPGgV5oh6dFvwEN%3D%2Bhb_w%40mail.gmail.com\n>>\n>>\n>> Thanks\n>\n> Hi,\n> For v3-0005-Support-UniqueKey-on-JoinRel.patch :\n>\n> +static void populate_joinrel_composited_uniquekey(PlannerInfo *root,\n>\n> populate_joinrel_composited_uniquekey -> populate_joinrel_composite_uniquekey (without the trailing d for composite)\n>\n> For populate_joinrel_uniquekeys():\n>\n> + foreach(lc, outerrel->uniquekeys)\n> + {\n> ...\n> + return;\n>\n> Should the remaining unique keys be considered ?\n>\n> For populate_joinrel_uniquekey_for_rel():\n>\n> + else if (bms_equal(r->right_relids, rel->relids) && r->left_ec != NULL)\n> + {\n> + other_ecs = lappend(other_ecs, r->right_ec);\n> + other_relids = bms_add_members(other_relids, r->left_relids);\n>\n> It seems the append to other_ecs is the same as the one for the `bms_equal(r->left_relids, rel->relids) && r->right_ec != NULL` case. 
Is this correct ?\n>\n\nCorrect, both of them are bugs, I will fix them in the next version,\nincluding the \"tailing d\".\nThanks for your review!\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)\n\n\n", "msg_date": "Mon, 16 Aug 2021 10:24:24 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "Hi:\n\npatch v4 fixed the 2 bugs Zhihong reported.\n\nAny feedback is welcome.", "msg_date": "Wed, 18 Aug 2021 20:29:33 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "On Tue, Jul 6, 2021 at 9:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > Tom, I'm wondering if you might get a chance to draw up a design for\n> > what you've got in mind with this? I assume adding a new field in\n> > Var, but I'm drawing a few blanks on how things might work for equal()\n> > when one Var has the field set and another does not.\n>\n> As I said before, it hasn't progressed much past the handwaving stage,\n> but it does seem like it's time to get it done. I doubt I'll have any\n> cycles for it during the commitfest, but maybe I can devote a block of\n> time during August.\n>\n> regards, tom lane\n\nHi Tom: do you get a chance to work on this? Looks like we have to fix\nthis one before we can move on to the uniquekey stuff.\n\n-- \nBest Regards\nAndy Fan\n\n\n", "msg_date": "Mon, 30 Aug 2021 16:59:17 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "Unknown why people have so little interest in this, AFAICS, there are more great\nusages of UniqueKey rather than the 'marking-distinct-as-noop'. 
The most\nexciting usage for me is it is helpful for JoinRel's pathkey.\n\nTake an example of this:\n\nSELECT .. FROM t1 JOIN t2 ON t1.any_column = t2.uniquekey;\nSELECT .. FROM t1 LEFT JOIN t2 ON t1.any_column = t2.uniquekey;\n\nSuppose before the join, t1 has a pathkey X, t2 has PathKey y. Then\n(t1.X, t2.Y) is\nordered as well for JoinRel(t1, t2). Then the pathkey of JoinRel(t1,\nt2) has a lot\nof usage again. Currently after the joining, only the outer join's\npathkey is maintained.\n\nAs for the extra planning cost of this, it looks like our current\ninfrastructure can support it\nwell since we know all the information when we generate the pathkey\nfor the Join Path.\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)\n\n\n", "msg_date": "Tue, 9 Nov 2021 10:16:55 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "> On Wed, Jul 07, 2021 at 01:20:24PM +1200, David Rowley wrote:\n> On Wed, 7 Jul 2021 at 13:04, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > Looking forward to watching this change closely, thank you both David and Tom!\n> > But I still don't understand what the faults my way have , do you mind telling the\n> > details?\n>\n> The problem is that we don't need 6 different ways to determine if a\n> Var can be NULL or not. You're proposing to add a method using\n> Bitmapsets and Tom has some proposing ideas around tracking\n> nullability in Vars. We don't need both.\n>\n> It seems to me that having it in Var allows us to have a much finer\n> gradient about where exactly a Var can be NULL.\n>\n> For example: SELECT nullablecol FROM tab WHERE nullablecol = <value>;\n>\n> If the equality operator is strict then the nullablecol can be NULL in\n> the WHERE clause but not in the SELECT list. 
Tom's idea should allow\n> us to determine both of those things but your idea cannot tell them\n> apart, so, in theory at least, Tom's idea seems better to me.\n\nHi,\n\nThis patch still occupies some place in my head, so I would like to ask few\nquestions to see where it's going:\n\n* From the last emails in this thread I gather that the main obstacle from the\n design side of things is functionality around figuring out if a Var could be\n NULL or not, and everyone is waiting for a counterproposal about how to do\n that better. Is that correct?\n\n* Is this thread only about notnullattrs field in RelOptInfo, or about the\n UniqueKeys patch series after all? The title indicates the first one, but the\n last posted patch series included everything as far as I can see.\n\n* Putting my archaeologist's hat on, I've tried to find out what this\n alternative proposal was about. The result findings are scattered through the\n archives -- which proves that it's a hard topic indeed -- and participants of\n this thread are probably more aware about them than I am. The most detailed\n handwaving I found in the thread [1], with an idea to introduce NullableVar\n wrapper created by parser, is that it? It makes more clear why such approach\n could be more beneficial than a new field in RelOptInfo. And if the thread is\n only about the notnullattrs, I guess it would be indeed enough to object.\n\n* Now, how essential is notnullattrs functionality for the UniqueKeys patch\n series? From what I understand, it's being used to set multi_nulls field of\n every UniqueKey to indicate whether this key could produce NULL or not (which\n means no guaranties about uniqueness could be provided). Is there a way to\n limit the scope of the patch series and introduce UniqueKeys without require\n multi_nulls at all, or (again, in some limited situations) fetch necessary\n information somehow on the fly e.g. 
only from catcache without introducing\n any new infrastructure?\n\n[1]: https://www.postgresql.org/message-id/25142.1580847861%40sss.pgh.pa.us\n\n\n", "msg_date": "Wed, 17 Nov 2021 16:21:44 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "Hi Dmitry:\n\nOn Wed, Nov 17, 2021 at 11:20 PM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> > On Wed, Jul 07, 2021 at 01:20:24PM +1200, David Rowley wrote:\n> > On Wed, 7 Jul 2021 at 13:04, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > > Looking forward to watching this change closely, thank you both David and Tom!\n> > > But I still don't understand what the faults my way have , do you mind telling the\n> > > details?\n> >\n> > The problem is that we don't need 6 different ways to determine if a\n> > Var can be NULL or not. You're proposing to add a method using\n> > Bitmapsets and Tom has some proposing ideas around tracking\n> > nullability in Vars. We don't need both.\n> >\n> > It seems to me that having it in Var allows us to have a much finer\n> > gradient about where exactly a Var can be NULL.\n> >\n> > For example: SELECT nullablecol FROM tab WHERE nullablecol = <value>;\n> >\n> > If the equality operator is strict then the nullablecol can be NULL in\n> > the WHERE clause but not in the SELECT list. Tom's idea should allow\n> > us to determine both of those things but your idea cannot tell them\n> > apart, so, in theory at least, Tom's idea seems better to me.\n>\n> Hi,\n>\n> This patch still occupies some place in my head, so I would like to ask few\n> questions to see where it's going:\n\nThanks for that!\n\n> * From the last emails in this thread I gather that the main obstacle from the\n> design side of things is functionality around figuring out if a Var could be\n> NULL or not, and everyone is waiting for a counterproposal about how to do\n> that better. 
Is that correct?\n\nThat is correct.\n\n> * Is this thread only about notnullattrs field in RelOptInfo, or about the\n> UniqueKeys patch series after all? The title indicates the first one, but the\n> last posted patch series included everything as far as I can see.\n\nThis thread is talking about the path series after all. Not null maintenance\nis the first step of the UniqueKey patch. If the not null issue can't\nbe addressed,\nthe overall UniqueKey patch would be hopeless.\n\n> * Putting my archaeologist's hat on, I've tried to find out what this\n> alternative proposal was about. The result findings are scattered through the\n> archives -- which proves that it's a hard topic indeed -- and participants of\n> this thread are probably more aware about them than I am. The most detailed\n> handwaving I found in the thread [1], with an idea to introduce NullableVar\n> wrapper created by parser, is that it? It makes more clear why such approach\n> could be more beneficial than a new field in RelOptInfo. And if the thread is\n> only about the notnullattrs, I guess it would be indeed enough to object.\n>\n> * Now, how essential is notnullattrs functionality for the UniqueKeys patch\n> series? From what I understand, it's being used to set multi_nulls field of\n> every UniqueKey to indicate whether this key could produce NULL or not (which\n> means no guaranties about uniqueness could be provided). Is there a way to\n> limit the scope of the patch series and introduce UniqueKeys without require\n> multi_nulls at all, or (again, in some limited situations) fetch necessary\n> information somehow on the fly e.g. only from catcache without introducing\n> any new infrastructure?\n>\n\n_I_ think there is no way to bypass that. I guess Tom has a bigger plan on\nVar (not only for notnull), but no time to invest in them so far. 
If\nthat is the case,\npersonally I think we can go ahead with my method first and continue the review\nprocess.\n\n\n-- \nBest Regards\nAndy Fan\n\n\n", "msg_date": "Mon, 22 Nov 2021 15:13:50 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" }, { "msg_contents": "On Mon, 22 Nov 2021 at 02:14, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> _I_ think there is no way to bypass that. I guess Tom has a bigger plan on\n> Var (not only for notnull), but no time to invest in them so far. If\n> that is the case,\n> personally I think we can go ahead with my method first and continue the review\n> process.\n\nThis discussion has gone on for two years now and meandered into\ndifferent directions. There have been a number of interesting\nproposals and patches in that time but it's not clear now what patch\nis even under consideration and what questions remain for it. And I\nthink this message from last November is the last comment on it so I\nwonder if it's reached a bit of an impasse.\n\nI think I would suggest starting a fresh thread with a patch distilled\nfrom the previous discussions. Then once that's settled repeat with\nadditional patches, keeping the discussion focused just on the current\nchange.\n\nPersonally I think these kinds of optimizations are important because\nthey're what allow people to use SQL without micro-optimizing each\nquery.\n\n-- \ngreg\n\n\n", "msg_date": "Thu, 31 Mar 2022 16:33:20 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Keep notnullattrs in RelOptInfo (Was part of UniqueKey patch\n series)" } ]
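The bitmap-of-indexes UniqueKey representation discussed in the thread above (a planner-global `root->unique_exprs` list, with each relation holding only a bitmap of indexes into it, and joins composing bitmaps by union) can be illustrated outside the planner with a small stand-alone model. This is a hypothetical Python simulation, not PostgreSQL code; the names `unique_exprs`, `base_rel_uniquekey`, and `join_uniquekey` only mimic the structures proposed in the emails:

```python
# Toy model of the proposed UniqueKey design: the unique expression sets
# are stored once in a global list (cf. root->unique_exprs), and every
# relation keeps only a bitmap of indexes into that list (frozenset here
# stands in for a Bitmapset).  Illustrative simulation only.

# One entry per unique expression set, stored once.
unique_exprs = [
    ("t1.a", "t1.b"),   # index 0: unique index on t1(a, b)
    ("t2.a", "t2.b"),   # index 1: unique index on t2(a, b)
    ("t3.a", "t3.b"),   # index 2: unique index on t3(a, b)
]

def base_rel_uniquekey(idx):
    """A base relation's UniqueKey: a bitmap referencing unique_exprs."""
    return frozenset([idx])

def join_uniquekey(outer_key, inner_key):
    """Composite UniqueKey of a join: simply the union of the two bitmaps.

    The join rel is unique on the combined expression sets, but the key
    stays one small bitmap instead of an ever-growing list of exprs."""
    return outer_key | inner_key

def uniquekey_exprs(key):
    """Expand a bitmap back into the expressions it covers."""
    exprs = []
    for idx in sorted(key):
        exprs.extend(unique_exprs[idx])
    return exprs

# JoinRel{1,2}, then JoinRel{1,2,3}: each join only unions bitmaps,
# mirroring "JoinRel{1,2,3,4} - Bitmapset{0,1,2,3}" from the design mail.
jr12 = join_uniquekey(base_rel_uniquekey(0), base_rel_uniquekey(1))
jr123 = join_uniquekey(jr12, base_rel_uniquekey(2))
print(sorted(jr123))           # [0, 1, 2]
print(uniquekey_exprs(jr123))  # ['t1.a', 't1.b', 't2.a', 't2.b', 't3.a', 't3.b']
```

This also makes the memory argument from the thread concrete: a ten-table join carries one small set per join rel rather than a twenty-expression list.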
[ { "msg_contents": "Hi,\n\nI need to filter out any system catalog objects from SQL,\nand I've learned it's not possible to simply filter based on namespace name,\nsince there are objects such as pg_am that don't have any namespace belonging,\nexcept indirectly via their handler, but since you can define a new access method\nusing an existing handler from the system catalog, there is no way to distinguish\nyour user created access handler from the system catalog access handlers\nonly based on namespace name based filtering.\n\nAfter some digging I found this\n\n #define FirstNormalObjectId 16384\n\nin src/include/access/transam.h, which pg_dump.c and 14 other files are using at 27 different places in the sources.\n\nSeems to be a popular and important fellow.\n\nI see this value doesn't change often, it was added back in 2005-04-13 in commit 2193a856a229026673cbc56310cd0bddf7b5ea25.\n\nIs it safe to just hard-code in application code needing to know this cut-off value?\n\nOr will we have a Bill Gates \"640K ought to be enough for anybody\" moment in the foreseeable future,\nwhere this limit needs to be increased?\n\nIf there is a risk we will, then maybe we should add a function such as $SUBJECT to expose this value to SQL users who needs it?\n\nI see there has been a related discussion in the thread \"Identifying user-created objects\"\n\n https://www.postgresql.org/message-id/flat/CA%2Bfd4k7Zr%2ByQLYWF3O_KjAJyYYUZFBZ_dFchfBvq5bMj9GgKQw%40mail.gmail.com\n\nHowever, this thread focused on security and wants to know if a specific oid is user defined or not.\n\nI think pg_get_first_normal_oid() would be more useful than pg_is_user_object(oid),\nsince with pg_get_first_normal_oid() you could do filtering based on oid indexes.\n\nCompare e.g.:\n\nSELECT * FROM pg_class WHERE oid >= pg_get_first_normal_oid()\n\nwith..\n\nSELECT * FROM pg_class WHERE pg_is_user_object(oid) IS TRUE\n\nThe first query could use the index on pg_class.oid,\nwhereas I'm not mistaken, the 
second query would need a seq_scan to evaluate pg_is_user_object() for each oid.\n\n/Joel", "msg_date": "Wed, 10 Feb 2021 08:43:09 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "pg_get_first_normal_oid()?" }, { "msg_contents": "On Wed, Feb 10, 2021 at 4:43 PM Joel Jacobson <joel@compiler.org> wrote:\n>\n> Hi,\n>\n> I need to filter out any system catalog objects from SQL,\n> and I've learned it's not possible to simply filter based on namespace name,\n> since there are objects such as pg_am that don't have any namespace belonging,\n> except indirectly via their handler, but since you can define a new access method\n> using an existing handler from the system catalog, there is no way to distinguish\n> your user created access handler from the system catalog access handlers\n> only based on namespace name based filtering.\n>\n> After some digging I found this\n>\n> #define FirstNormalObjectId 16384\n>\n> in src/include/access/transam.h, which pg_dump.c and 14 other files are using at 27 different places in the sources.\n>\n> Seems to be a popular and important fellow.\n>\n> I see this value doesn't change often, it was added back in 2005-04-13 in commit 2193a856a229026673cbc56310cd0bddf7b5ea25.\n>\n> Is it safe to just hard-code in application code needing to know this cut-off value?\n>\n> Or will we have a Bill Gates \"640K ought to be enough for anybody\" moment in the foreseeable future,\n> where this limit needs to be increased?\n>\n> If there is a risk we will, then maybe we should add a function such as $SUBJECT to expose this value to SQL users who needs it?\n>\n> I see there has been a related discussion in the thread \"Identifying user-created objects\"\n>\n> https://www.postgresql.org/message-id/flat/CA%2Bfd4k7Zr%2ByQLYWF3O_KjAJyYYUZFBZ_dFchfBvq5bMj9GgKQw%40mail.gmail.com\n\nAs mentioned in that thread, it's still hard to distinguish between\nuser objects and system objects using only OID since we can create\nobjects with 
OID lower than FirstNormalObjectId by creating objects in\nsingle-user mode. It was not enough for security purposes. I think\nproviding concrete use cases of the function would support this\nproposal.\n\n>\n> However, this thread focused on security and wants to know if a specific oid is user defined or not.\n>\n> I think pg_get_first_normal_oid() would be more useful than pg_is_user_object(oid),\n> since with pg_get_first_normal_oid() you could do filtering based on oid indexes.\n>\n> Compare e.g.:\n>\n> SELECT * FROM pg_class WHERE oid >= pg_get_first_normal_oid()\n>\n> with..\n>\n> SELECT * FROM pg_class WHERE pg_is_user_object(oid) IS TRUE\n>\n> The first query could use the index on pg_class.oid,\n> whereas I'm not mistaken, the second query would need a seq_scan to evaluate pg_is_user_object() for each oid.\n\nYes. I've also considered the former approach but I prioritized\nreadability and extensibility; it requires prior knowledge for users\nthat OIDs greater than the first normal OID are used during normal\nmulti-user operation. Also in a future if we have similar functions\nfor other OID bounds such as FirstGenbkiObjectId and\nFirstBootstrapObjectId we will end up doing like 'WHERE oid >=\npg_get_first_bootstrap_oid() and oid < pg_get_first_normal_oid()',\nwhich is not intuitive.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 10 Feb 2021 20:46:42 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_get_first_normal_oid()?" } ]
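The cutoff behaviour Joel describes can be sketched as a small stand-alone model. This is illustrative Python only (not a PostgreSQL API); the constant mirrors `FirstNormalObjectId` from `src/include/access/transam.h`, and, per Sawada's caveat in the thread, the boundary is a filtering heuristic rather than a security guarantee:

```python
# FirstNormalObjectId from src/include/access/transam.h: OIDs at or above
# this value are assigned during normal multi-user operation.
FIRST_NORMAL_OBJECT_ID = 16384

def is_normal_oid(oid):
    """True if the OID falls in the range used for user-created objects.

    Caveat from the thread: objects created in single-user mode can still
    receive lower OIDs, so this is a filtering heuristic, not a security
    boundary."""
    return oid >= FIRST_NORMAL_OBJECT_ID

# Well-known catalog OIDs sit below the cutoff; freshly created user
# objects typically land at or above it.
catalog_oids = [1259, 1255, 2601]        # pg_class, pg_proc, pg_am
user_oids = [16385, 24576]
print([is_normal_oid(o) for o in catalog_oids])  # [False, False, False]
print([is_normal_oid(o) for o in user_oids])     # [True, True]
```

The same predicate is what the thread's `WHERE oid >= pg_get_first_normal_oid()` query would evaluate on the server, with the advantage there that the comparison can use the index on `pg_class.oid`.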
[ { "msg_contents": "Hi\n My company is looking for a team of developers to implement the \"flashback database\" functionality in PostgreSQL.\n Do you think it's feasible to implement? how many days of development?\n\n Thanks in advance\n\nBest Regards\nDidier ROS\nE.D.F\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\n\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\n\nIl est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\n____________________________________________________\n\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\n\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. 
If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\n\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.", "msg_date": "Wed, 10 Feb 2021 08:56:32 +0000", "msg_from": "ROS Didier <didier.ros@edf.fr>", "msg_from_op": true, "msg_subject": "PostgreSQL and Flashback Database" }, { "msg_contents": "Hi Didier,\n\nHave you ever had a look at the E-Maj extension?\n\nDepending on the features you are really looking for, it may fit the needs.\n\nHere are some pointers :\n\n- github repo for the extension : https://github.com/dalibo/emaj\n\n- github repo for the web client : https://github.com/dalibo/emaj_web\n\n- online documentation : https://emaj.readthedocs.io/en/latest/ \n<https://emaj.readthedocs.io/en/v3.4.0/> or even in French, especially \nfor you ;-) https://emaj.readthedocs.io/fr/latest/\n\nFeel free to contact me to talk about it.\n\nBest regards.\n\nPhilippe.\n\nLe 10/02/2021 à 09:56, ROS Didier a écrit :\n>\n> Hi\n>\n> My company is looking for a team of developers to implement the \n> \"flashback database\" functionality in PostgreSQL.\n>\n> Do you think it's feasible to implement? how many days \n> of development?\n>\n> Thanks in advance\n>\n> Best Regards\n>\n> Didier ROS\n>\n> E.D.F\n>\n------------------------------------------------------------------------\n<https://www.dalibo.com/>\n*DALIBO*\n*L'expertise PostgreSQL*\n43, rue du Faubourg Montmartre\n75009 Paris \t*Philippe Beaudoin*\n*Consultant Avant-Vente*\n+33 (0)1 84 72 76 11\n+33 (0)7 69 14 67 21\nphilippe.beaudoin@dalibo.com\nValorisez vos compétences PostgreSQL, faites-vous certifier par Dalibo \n<https://certification.dalibo.com/> !", "msg_date": "Fri, 12 Feb 2021 14:15:42 +0100", "msg_from": "Philippe Beaudoin <philippe.beaudoin@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Flashback Database" } ]
[ { "msg_contents": "Hi\n\nIs there some reason why \\copy statement (parse_slash_copy parser) doesn't\nsupport psql variables?\n\nRegards\n\nPavel", "msg_date": "Wed, 10 Feb 2021 14:33:11 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "parse_slash_copy doesn't support psql variables substitution" }, { "msg_contents": "On Wed, Feb 10, 2021 at 8:33 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n>\n> Is there some reason why \\copy statement (parse_slash_copy parser) doesn't\n> support psql variables?\n>\n> Regards\n>\n> Pavel\n>\n\nI remember wondering about that when I was working on the \\if stuff. I dug\ninto it a bit, but the problem was out of scope for my goals.\n\nThe additional options recently added to \\g reduced my need for \\copy, and\nit seemed like there was some effort to have input pipes as well, that\nwould eliminate the need for \\copy altogether.", "msg_date": "Thu, 11 Feb 2021 18:36:49 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: parse_slash_copy doesn't support psql variables substitution" } ]
[ { "msg_contents": "Hi,\n\nPer Coverity.\n\nThe functions ExecGetInsertedCols and ExecGetUpdatedCols at ExecUtils.c\nonly are safe to call if the variable \"ri_RangeTableIndex\" is != 0.\n\nOtherwise a possible Dereference after null check (FORWARD_NULL) can be\nraised.\n\nExemple:\n\nvoid\n1718ExecPartitionCheckEmitError(ResultRelInfo *resultRelInfo,\n1719 TupleTableSlot *\nslot,\n1720 EState *estate)\n1721{\n1722 Oid root_relid;\n1723 TupleDesc tupdesc;\n1724 char *val_desc;\n1725 Bitmapset *modifiedCols;\n1726\n1727 /*\n1728\n * If the tuple has been routed, it's been converted to the partition's\n1729\n * rowtype, which might differ from the root table's. We must\nconvert it\n1730\n * back to the root table's rowtype so that val_desc in the\nerror message\n1731 * matches the input tuple.\n1732 */\n\n1. Condition resultRelInfo->ri_RootResultRelInfo, taking false branch.\n\n2. var_compare_op: Comparing resultRelInfo->ri_RootResultRelInfo to null\nimplies that resultRelInfo->ri_RootResultRelInfo might be null.\n1733 if (resultRelInfo->ri_RootResultRelInfo)\n1734 {\n1735 ResultRelInfo *rootrel = resultRelInfo->\nri_RootResultRelInfo;\n1736 TupleDesc old_tupdesc;\n1737 AttrMap *map;\n1738\n1739 root_relid = RelationGetRelid(rootrel->ri_RelationDesc);\n1740 tupdesc = RelationGetDescr(rootrel->ri_RelationDesc);\n1741\n1742 old_tupdesc = RelationGetDescr(resultRelInfo->\nri_RelationDesc);\n1743 /* a reverse map */\n1744 map = build_attrmap_by_name_if_req(old_tupdesc, tupdesc\n);\n1745\n1746 /*\n1747\n * Partition-specific slot's tupdesc can't be changed,\nso allocate a\n1748 * new one.\n1749 */\n1750 if (map != NULL)\n1751 slot = execute_attr_map_slot(map, slot,\n1752\n\nMakeTupleTableSlot(tupdesc, &TTSOpsVirtual));\n1753 modifiedCols = bms_union(ExecGetInsertedCols(rootrel,\nestate),\n1754\nExecGetUpdatedCols(rootrel, estate));\n1755 }\n1756 else\n1757 {\n1758 root_relid = RelationGetRelid(resultRelInfo->\nri_RelationDesc);\n1759 tupdesc = 
RelationGetDescr(resultRelInfo->\nri_RelationDesc);\n\nCID 1446241 (#1 of 1): Dereference after null check (FORWARD_NULL)3.\nvar_deref_model: Passing resultRelInfo to ExecGetInsertedCols, which\ndereferences null resultRelInfo->ri_RootResultRelInfo. [show details\n<https://scan6.coverity.com/eventId=32356039-3&modelId=32356039-0&fileInstanceId=112435213&filePath=%2Fdll%2Fpostgres%2Fsrc%2Fbackend%2Fexecutor%2FexecUtils.c&fileStart=1230&fileEnd=1255>\n]\n1760 modifiedCols = bms_union(ExecGetInsertedCols(\nresultRelInfo, estate),\n1761\nExecGetUpdatedCols(resultRelInfo, estate));\n1762 }\n1763\n1764 val_desc = ExecBuildSlotValueDescription(root_relid,\n1765\n\nslot,\n1766\n\ntupdesc,\n1767\n\nmodifiedCols,\n1768\n\n64);\n1769 ereport(ERROR,\n1770 (errcode(ERRCODE_CHECK_VIOLATION),\n1771 errmsg(\n\"new row for relation \\\"%s\\\" violates partition constraint\",\n1772\n\nRelationGetRelationName(resultRelInfo->ri_RelationDesc)),\n1773 val_desc ? errdetail(\"Failing row contains %s.\"\n, val_desc) : 0,\n1774 errtable(resultRelInfo->ri_RelationDesc)));\n1775}\n\nregards,\nRanier Viela", "msg_date": "Wed, 10 Feb 2021 19:54:46 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Possible dereference after null check\n (src/backend/executor/ExecUtils.c)" }, { "msg_contents": "At Wed, 10 Feb 2021 19:54:46 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> Hi,\n> \n> Per Coverity.\n> \n> The functions ExecGetInsertedCols and ExecGetUpdatedCols at ExecUtils.c\n> only are safe to call if the variable \"ri_RangeTableIndex\" is  != 0.\n> \n> Otherwise a possible Dereference after null check (FORWARD_NULL) can be\n> raised.\n\nAs it turns out, it's a false positive. 
And perhaps we don't want to\ntake action just to satisfy the static code analyzer.\n\n\nThe coment in ExecGetInsertedCols says:\n\n> /*\n> * The columns are stored in the range table entry. If this ResultRelInfo\n> * doesn't have an entry in the range table (i.e. if it represents a\n> * partition routing target), fetch the parent's RTE and map the columns\n> * to the order they are in the partition.\n> */\n> if (relinfo->ri_RangeTableIndex != 0)\n> {\n\nThis means that any one of the two is always usable here. AFAICS,\nactually, ri_RangeTableIndex is non-zero for partitioned (=leaf) and\nnon-partitoned relations and ri_RootResultRelInfo is non-null for\npartitioned (parent or intermediate) relations (since they don't have\na coressponding range table entry).\n\nThe only cases where both are 0 and NULL are trigger-use, which is\nunrelated to the code path.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 12 Feb 2021 15:28:01 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Possible dereference after null check\n (src/backend/executor/ExecUtils.c)" }, { "msg_contents": "Em sex., 12 de fev. de 2021 às 03:28, Kyotaro Horiguchi <\nhorikyota.ntt@gmail.com> escreveu:\n\n> At Wed, 10 Feb 2021 19:54:46 -0300, Ranier Vilela <ranier.vf@gmail.com>\n> wrote in\n> > Hi,\n> >\n> > Per Coverity.\n> >\n> > The functions ExecGetInsertedCols and ExecGetUpdatedCols at ExecUtils.c\n> > only are safe to call if the variable \"ri_RangeTableIndex\" is != 0.\n> >\n> > Otherwise a possible Dereference after null check (FORWARD_NULL) can be\n> > raised.\n>\n> As it turns out, it's a false positive. And perhaps we don't want to\n> take action just to satisfy the static code analyzer.\n>\n>\n> The coment in ExecGetInsertedCols says:\n>\n> > /*\n> > * The columns are stored in the range table entry. If this ResultRelInfo\n> > * doesn't have an entry in the range table (i.e. 
if it represents a\n> > * partition routing target), fetch the parent's RTE and map the columns\n> > * to the order they are in the partition.\n> > */\n> > if (relinfo->ri_RangeTableIndex != 0)\n> > {\n>\n> This means that any one of the two is always usable here. AFAICS,\n> actually, ri_RangeTableIndex is non-zero for partitioned (=leaf) and\n> non-partitoned relations and ri_RootResultRelInfo is non-null for\n> partitioned (parent or intermediate) relations (since they don't have\n> a coressponding range table entry).\n>\n> The only cases where both are 0 and NULL are trigger-use, which is\n> unrelated to the code path.\n>\nThis is a case where it would be worth an assertion.\nWhat do you think?\n\nregards,\nRanier Vilela", "msg_date": "Fri, 12 Feb 2021 13:11:58 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible dereference after null check\n (src/backend/executor/ExecUtils.c)" }, { "msg_contents": "Em sex., 12 de fev. de 2021 às 13:11, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Em sex., 12 de fev. de 2021 às 03:28, Kyotaro Horiguchi <\n> horikyota.ntt@gmail.com> escreveu:\n>\n>> At Wed, 10 Feb 2021 19:54:46 -0300, Ranier Vilela <ranier.vf@gmail.com>\n>> wrote in\n>> > Hi,\n>> >\n>> > Per Coverity.\n>> >\n>> > The functions ExecGetInsertedCols and ExecGetUpdatedCols at ExecUtils.c\n>> > only are safe to call if the variable \"ri_RangeTableIndex\" is  != 0.\n>> >\n>> > Otherwise a possible Dereference after null check (FORWARD_NULL) can be\n>> > raised.\n>>\n>> As it turns out, it's a false positive. And perhaps we don't want to\n>> take action just to satisfy the static code analyzer.\n>>\n>>\n>> The coment in ExecGetInsertedCols says:\n>>\n>> > /*\n>> >  * The columns are stored in the range table entry. If this\n>> ResultRelInfo\n>> >  * doesn't have an entry in the range table (i.e. if it represents a\n>> >  * partition routing target), fetch the parent's RTE and map the columns\n>> >  * to the order they are in the partition.\n>> >  */\n>> > if (relinfo->ri_RangeTableIndex != 0)\n>> > {\n>>\n>> This means that any one of the two is always usable here. 
AFAICS,\n>> actually, ri_RangeTableIndex is non-zero for partitioned (=leaf) and\n>> non-partitoned relations and ri_RootResultRelInfo is non-null for\n>> partitioned (parent or intermediate) relations (since they don't have\n>> a coressponding range table entry).\n>>\n>> The only cases where both are 0 and NULL are trigger-use, which is\n>> unrelated to the code path.\n>>\n> This is a case where it would be worth an assertion.\n> What do you think?\n>\nApparently my efforts with Coverity have been worth it.\nAnd together we are helping to keep Postgres more secure.\nAlthough sometimes it is not recognized for that [1].\n\nregards,\nRanier Vilela\n\n[1]\nhttps://github.com/postgres/postgres/commit/54e51dcde03e5c746e8de6243c69fafdc8d0ec7a", "msg_date": "Mon, 15 Feb 2021 11:01:39 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible dereference after null check\n (src/backend/executor/ExecUtils.c)" } ]
[ { "msg_contents": "Hi,\n\nPer Coverity.\n\nIf xid is a subtransaction, the setup of base snapshot on the top-level\ntransaction,\ncan be not optional, otherwise a Dereference null return value\n(NULL_RETURNS) can be raised.\n\nPatch suggestion to fix this.\n\ndiff --git a/src/backend/replication/logical/reorderbuffer.c\nb/src/backend/replication/logical/reorderbuffer.c\nindex 5a62ab8bbc..3c6a81f716 100644\n--- a/src/backend/replication/logical/reorderbuffer.c\n+++ b/src/backend/replication/logical/reorderbuffer.c\n@@ -2993,8 +2993,8 @@ ReorderBufferSetBaseSnapshot(ReorderBuffer *rb,\nTransactionId xid,\n */\n txn = ReorderBufferTXNByXid(rb, xid, true, &is_new, lsn, true);\n if (rbtxn_is_known_subxact(txn))\n- txn = ReorderBufferTXNByXid(rb, txn->toplevel_xid, false,\n- NULL, InvalidXLogRecPtr, false);\n+ txn = ReorderBufferTXNByXid(rb, txn->toplevel_xid, true,\n+ NULL, InvalidXLogRecPtr, true);\n Assert(txn->base_snapshot == NULL);\n\n txn->base_snapshot = snap;\n\nregards,\nRanier Vilela", "msg_date": "Wed, 10 Feb 2021 20:12:38 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Possible dereference null return\n (src/backend/replication/logical/reorderbuffer.c)" }, { "msg_contents": "At Wed, 10 Feb 2021 20:12:38 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> Hi,\n> \n> Per Coverity.\n> \n> If xid is a subtransaction, the setup of base snapshot on the top-level\n> transaction,\n> can be not optional, otherwise a Dereference null return value\n> (NULL_RETURNS) can be raised.\n> \n> Patch suggestion to fix this.\n> \n> diff --git a/src/backend/replication/logical/reorderbuffer.c\n> b/src/backend/replication/logical/reorderbuffer.c\n> index 5a62ab8bbc..3c6a81f716 100644\n> --- a/src/backend/replication/logical/reorderbuffer.c\n> +++ b/src/backend/replication/logical/reorderbuffer.c\n> @@ -2993,8 +2993,8 @@ ReorderBufferSetBaseSnapshot(ReorderBuffer *rb,\n> TransactionId xid,\n> */\n> txn = 
ReorderBufferTXNByXid(rb, xid, true, &is_new, lsn, true);\n> if (rbtxn_is_known_subxact(txn))\n> - txn = ReorderBufferTXNByXid(rb, txn->toplevel_xid, false,\n> - NULL, InvalidXLogRecPtr, false);\n> + txn = ReorderBufferTXNByXid(rb, txn->toplevel_xid, true,\n> + NULL, InvalidXLogRecPtr, true);\n> Assert(txn->base_snapshot == NULL);\n\nIf the return from the first call is a subtransaction, the second call\nalways obtain the top transaction. If the top transaction actualy did\nnot exist, it's rather the correct behavior to cause SEGV, than\ncreating a bogus rbtxn. THus it is wrong to set create=true and\ncreate_as_top=true. We could change the assertion like Assert (txn &&\ntxn->base_snapshot) to make things clearer.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 12 Feb 2021 15:56:02 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Possible dereference null return\n (src/backend/replication/logical/reorderbuffer.c)" }, { "msg_contents": "Em sex., 12 de fev. 
de 2021 às 03:56, Kyotaro Horiguchi <\nhorikyota.ntt@gmail.com> escreveu:\n\n> At Wed, 10 Feb 2021 20:12:38 -0300, Ranier Vilela <ranier.vf@gmail.com>\n> wrote in\n> > Hi,\n> >\n> > Per Coverity.\n> >\n> > If xid is a subtransaction, the setup of base snapshot on the top-level\n> > transaction,\n> > can be not optional, otherwise a Dereference null return value\n> > (NULL_RETURNS) can be raised.\n> >\n> > Patch suggestion to fix this.\n> >\n> > diff --git a/src/backend/replication/logical/reorderbuffer.c\n> > b/src/backend/replication/logical/reorderbuffer.c\n> > index 5a62ab8bbc..3c6a81f716 100644\n> > --- a/src/backend/replication/logical/reorderbuffer.c\n> > +++ b/src/backend/replication/logical/reorderbuffer.c\n> > @@ -2993,8 +2993,8 @@ ReorderBufferSetBaseSnapshot(ReorderBuffer *rb,\n> > TransactionId xid,\n> >   */\n> >   txn = ReorderBufferTXNByXid(rb, xid, true, &is_new, lsn, true);\n> >   if (rbtxn_is_known_subxact(txn))\n> > - txn = ReorderBufferTXNByXid(rb, txn->toplevel_xid, false,\n> > - NULL, InvalidXLogRecPtr, false);\n> > + txn = ReorderBufferTXNByXid(rb, txn->toplevel_xid, true,\n> > + NULL, InvalidXLogRecPtr, true);\n> >   Assert(txn->base_snapshot == NULL);\n>\n> If the return from the first call is a subtransaction, the second call\n> always obtain the top transaction.  If the top transaction actualy did\n> not exist, it's rather the correct behavior to cause SEGV, than\n> creating a bogus rbtxn. THus it is wrong to set create=true and\n> create_as_top=true.  We could change the assertion like Assert (txn &&\n> txn->base_snapshot) to make things clearer.\n>\nIt does not make sense to generate a SEGV on purpose and\nassertion would not solve why this happens in a production environment.\nBetter to report an error if the second call fails.\nWhat do you suggest as a description of the error?\n\nregards,\nRanier Vilela", "msg_date": "Fri, 12 Feb 2021 13:08:02 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible dereference null return\n (src/backend/replication/logical/reorderbuffer.c)" }, { "msg_contents": "On Fri, Feb 12, 2021 at 03:56:02PM +0900, Kyotaro Horiguchi wrote:\n> If the return from the first call is a subtransaction, the second call\n> always obtain the top transaction. 
If the top transaction actualy did\n> > not exist, it's rather the correct behavior to cause SEGV, than\n> > creating a bogus rbtxn. THus it is wrong to set create=true and\n> > create_as_top=true. We could change the assertion like Assert (txn &&\n> > txn->base_snapshot) to make things clearer.\n>\n> I don't see much the point to change this code. The result would be\n> the same: a PANIC at this location.\n> --\n> Michael\n>", "msg_date": "Fri, 12 Feb 2021 20:09:17 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Possible dereference null return\n (src/backend/replication/logical/reorderbuffer.c)" }, { "msg_contents": "Em sáb., 13 de fev. de 2021 às 01:07, Zhihong Yu <zyu@yugabyte.com>\nescreveu:\n\n> Hi,\n> How about the following patch ?\n>\n> ReorderBufferSetBaseSnapshot() can return a bool to indicate whether the\n> base snapshot is set up.\n>\n> For the call by SnapBuildCommitTxn(), it seems xid is top transaction. So\n> the return value doesn't need to be checked.\n>\nIMO anything else is better than PANIC.\nAnyway, if all fails, reporting an error can contribute to checking where.\n\nAttached a patch suggestion v2.\n1. SnapBuildProcessChange returns a result of ReorderBufferSetBaseSnapshot,\nso the caller can act accordingly.\n2. SnapBuildCommitTxn can't ignore a result\nfrom ReorderBufferSetBaseSnapshot, even if it never fails.\n\nregards,\nRanier Vilela", "msg_date": "Sat, 13 Feb 2021 17:35:05 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible dereference null return\n (src/backend/replication/logical/reorderbuffer.c)" }, { "msg_contents": "Em sáb., 13 de fev. de 2021 às 17:35, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n>\n> Em sáb., 13 de fev. 
de 2021 às 01:07, Zhihong Yu <zyu@yugabyte.com>\n> escreveu:\n>\n>> Hi,\n>> How about the following patch ?\n>>\n>> ReorderBufferSetBaseSnapshot() can return a bool to indicate whether the\n>> base snapshot is set up.\n>>\n>> For the call by SnapBuildCommitTxn(), it seems xid is top transaction. So\n>> the return value doesn't need to be checked.\n>>\n> IMO anything else is better than PANIC.\n> Anyway, if all fails, reporting an error can contribute to checking where.\n>\n> Attached a patch suggestion v2.\n>\nSorry, I forgot to mention, it is based on a patch from Zhihong Yu.\n\nregards,\nRanier Vilela\n\nEm sáb., 13 de fev. de 2021 às 17:35, Ranier Vilela <ranier.vf@gmail.com> escreveu:Em sáb., 13 de fev. de 2021 às 01:07, Zhihong Yu <zyu@yugabyte.com> escreveu:Hi,How about the following patch ?ReorderBufferSetBaseSnapshot() can return a bool to indicate whether the base snapshot is set up.For the call by SnapBuildCommitTxn(), it seems xid is top transaction. So the return value doesn't need to be checked.IMO anything else is better than PANIC.Anyway, if all fails, reporting an error can contribute to checking where.Attached a patch suggestion v2.Sorry, I forgot to mention, it is based on a patch from Zhihong Yu.regards,Ranier Vilela", "msg_date": "Sat, 13 Feb 2021 17:40:38 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible dereference null return\n (src/backend/replication/logical/reorderbuffer.c)" }, { "msg_contents": "Hi,\n+ (errmsg(\"BaseSnapshot cant't be setup at point %X/%X\",\n+ (uint32) (lsn >> 32), (uint32) lsn),\n+ errdetail(\"Top transaction is running.\")));\n\nDid you mean this errdetail:\n\nTop transaction is not running.\n\nCheers\n\nOn Sat, Feb 13, 2021 at 12:34 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n>\n> Em sáb., 13 de fev. 
de 2021 às 01:07, Zhihong Yu <zyu@yugabyte.com>\n> escreveu:\n>\n>> Hi,\n>> How about the following patch ?\n>>\n>> ReorderBufferSetBaseSnapshot() can return a bool to indicate whether the\n>> base snapshot is set up.\n>>\n>> For the call by SnapBuildCommitTxn(), it seems xid is top transaction. So\n>> the return value doesn't need to be checked.\n>>\n> IMO anything else is better than PANIC.\n> Anyway, if all fails, reporting an error can contribute to checking where.\n>\n> Attached a patch suggestion v2.\n>\nSorry, I forgot to mention, it is based on a patch from Zhihong Yu.\n\nregards,\nRanier Vilela", "msg_date": "Sat, 13 Feb 2021 17:40:38 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible dereference null return\n (src/backend/replication/logical/reorderbuffer.c)" }, { "msg_contents": "Hi,\n+ (errmsg(\"BaseSnapshot cant't be setup at point %X/%X\",\n+ (uint32) (lsn >> 32), (uint32) lsn),\n+ errdetail(\"Top transaction is running.\")));\n\nDid you mean this errdetail:\n\nTop transaction is not running.\n\nCheers\n\nOn Sat, Feb 13, 2021 at 12:34 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n>\n> Em sáb., 13 de fev. 
\n\nregards,Ranier Vilela", "msg_date": "Sat, 13 Feb 2021 12:50:48 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Possible dereference null return\n (src/backend/replication/logical/reorderbuffer.c)" }, { "msg_contents": "Em sáb., 13 de fev. de 2021 às 17:48, Zhihong Yu <zyu@yugabyte.com>\nescreveu:\n\n> Hi,\n> + (errmsg(\"BaseSnapshot cant't be setup at point %X/%X\",\n> + (uint32) (lsn >> 32), (uint32) lsn),\n> + errdetail(\"Top transaction is running.\")));\n>\n> Did you mean this errdetail:\n>\n> Top transaction is not running.\n>\nDone.\n\nThanks Zhihong.\nv3 based on your patch, attached.\n\nregards,\nRanier Vilela", "msg_date": "Sat, 13 Feb 2021 17:59:32 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible dereference null return\n (src/backend/replication/logical/reorderbuffer.c)" }, { "msg_contents": "Hi,\nPatch v4 corrects a small typo:\n+ (errmsg(\"BaseSnapshot cant't be setup at point %X/%X\",\n\nCheers\n\nOn Sat, Feb 13, 2021 at 12:58 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> Em sáb., 13 de fev. de 2021 às 17:48, Zhihong Yu <zyu@yugabyte.com>\n> escreveu:\n>\n>> Hi,\n>> + (errmsg(\"BaseSnapshot cant't be setup at point %X/%X\",\n>> + (uint32) (lsn >> 32), (uint32) lsn),\n>> + errdetail(\"Top transaction is running.\")));\n>>\n>> Did you mean this errdetail:\n>>\n>> Top transaction is not running.\n>>\n> Done.\n>\n> Thanks Zhihong.\n> v3 based on your patch, attached.\n>\n> regards,\n> Ranier Vilela\n>", "msg_date": "Sat, 13 Feb 2021 13:12:25 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Possible dereference null return\n (src/backend/replication/logical/reorderbuffer.c)" } ]
[ { "msg_contents": "Hi,\n\nPer Coverity.\n\nThe use of type \"long\" is problematic with Windows 64bits.\nLong type on Windows 64bits is 32 bits.\n\nSee at:\nhttps://docs.microsoft.com/pt-br/cpp/cpp/data-type-ranges?view=msvc-160\n\n\n*long* 4 *long int*, *signed long int* -2.147.483.648 a 2.147.483.647\nTherefore long never be > INT_MAX at Windows 64 bits.\n\nThus lindex is always false in this expression:\nif (errno != 0 || badp == c || *badp != '\\0' || lindex > INT_MAX || lindex\n < INT_MIN)\n\nPatch suggestion to fix this.\n\ndiff --git a/src/backend/utils/adt/jsonfuncs.c\nb/src/backend/utils/adt/jsonfuncs.c\nindex 215a10f16e..54b0eded76 100644\n--- a/src/backend/utils/adt/jsonfuncs.c\n+++ b/src/backend/utils/adt/jsonfuncs.c\n@@ -1675,7 +1675,7 @@ push_path(JsonbParseState **st, int level, Datum\n*path_elems,\n * end, the access index must be normalized by level.\n */\n enum jbvType *tpath = palloc0((path_len - level) * sizeof(enum jbvType));\n- long lindex;\n+ int64 lindex;\n JsonbValue newkey;\n\n /*\n\nregards,\nRanier Vilela", "msg_date": "Wed, 10 Feb 2021 20:42:45 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Operands don't affect result (CONSTANT_EXPRESSION_RESULT)\n (src/backend/utils/adt/jsonfuncs.c)" }, { "msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> *long* 4 *long int*, *signed long int* -2.147.483.648 a 2.147.483.647\n> Therefore long never be > INT_MAX at Windows 64 bits.\n> Thus lindex is always false in this expression:\n> if (errno != 0 || badp == c || *badp != '\\0' || lindex > INT_MAX || lindex\n> < INT_MIN)\n\nWarnings about this are purest nanny-ism.\n\nAt the same time, I think this code could be improved; but the way\nto do that is to use strtoint(), rather than kluging the choice of\ndatatype even further.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Feb 2021 23:46:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 
Operands don't affect result (CONSTANT_EXPRESSION_RESULT)\n (src/backend/utils/adt/jsonfuncs.c)" }, { "msg_contents": "Em qui., 11 de fev. de 2021 às 01:46, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > *long* 4 *long int*, *signed long int* -2.147.483.648 a 2.147.483.647\n> > Therefore long never be > INT_MAX at Windows 64 bits.\n> > Thus lindex is always false in this expression:\n> > if (errno != 0 || badp == c || *badp != '\\0' || lindex > INT_MAX ||\n> lindex\n> > < INT_MIN)\n>\n> At the same time, I think this code could be improved; but the way\n> to do that is to use strtoint(), rather than kluging the choice of\n> datatype even further.\n>\nNo matter the function used strtol or strtoint, push_path will remain\nbroken with Windows 64bits.\nOr need to correct the expression.\nDefinitely using long is a bad idea.\n\nregards,\nRanier Vilela\n\nEm qui., 11 de fev. de 2021 às 01:46, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Ranier Vilela <ranier.vf@gmail.com> writes:\n> *long* 4 *long int*, *signed long int* -2.147.483.648 a 2.147.483.647\n> Therefore long never be > INT_MAX at Windows 64 bits.\n> Thus lindex is always false in this expression:\n> if (errno != 0 || badp == c || *badp != '\\0' || lindex > INT_MAX ||  lindex\n>  < INT_MIN)\nAt the same time, I think this code could be improved; but the way\nto do that is to use strtoint(), rather than kluging the choice of\ndatatype even further.No matter the function used strtol or strtoint, push_path will remain broken with Windows 64bits.Or need to correct the expression.Definitely using long is a bad idea.regards,Ranier Vilela", "msg_date": "Thu, 11 Feb 2021 10:08:09 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Operands don't affect result (CONSTANT_EXPRESSION_RESULT)\n (src/backend/utils/adt/jsonfuncs.c)" }, { "msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> Em qui., 11 de fev. 
de 2021 às 01:46, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n>> At the same time, I think this code could be improved; but the way\n>> to do that is to use strtoint(), rather than kluging the choice of\n>> datatype even further.\n\n> No matter the function used strtol or strtoint, push_path will remain\n> broken with Windows 64bits.\n\nThere is quite a lot of difference between \"broken\" and \"my compiler\ngenerates pointless warnings\". Still, I changed it to use strtoint(),\nbecause that's simpler and better style.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Feb 2021 12:51:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Operands don't affect result (CONSTANT_EXPRESSION_RESULT)\n (src/backend/utils/adt/jsonfuncs.c)" }, { "msg_contents": "Em qui., 11 de fev. de 2021 às 14:51, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > Em qui., 11 de fev. de 2021 às 01:46, Tom Lane <tgl@sss.pgh.pa.us>\n> escreveu:\n> >> At the same time, I think this code could be improved; but the way\n> >> to do that is to use strtoint(), rather than kluging the choice of\n> >> datatype even further.\n>\n> > No matter the function used strtol or strtoint, push_path will remain\n> > broken with Windows 64bits.\n>\n> There is quite a lot of difference between \"broken\" and \"my compiler\n> generates pointless warnings\". Still, I changed it to use strtoint(),\n> because that's simpler and better style.\n>\nThanks Tom, for fixing this.\n\nregards,\nRanier Vilela", "msg_date": "Thu, 11 Feb 2021 18:56:13 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Operands don't affect result (CONSTANT_EXPRESSION_RESULT)\n (src/backend/utils/adt/jsonfuncs.c)" } ]
[ { "msg_contents": "As I mentioned in connection with adding the src/test/modules/test_regex\ntest code, I've been fooling with some performance improvements to our\nregular expression engine. Here's the first fruits of that labor.\nThis is mostly concerned with cutting the overhead for handling trivial\nunconstrained patterns like \".*\".\n\n0001 creates the concept of a \"rainbow\" arc within regex NFAs. You can\nread background info about this in the \"Colors and colormapping\" part of\nregex/README, but the basic point is that right now, representing a dot\n(\".\", match anything) within an NFA requires a separate arc for each\n\"color\" (character equivalence class) that the regex needs. This uses\nup a fair amount of storage and processing effort, especially in larger\nregexes which tend to have a lot of colors. We can replace such a\n\"rainbow\" of arcs with a single arc labeled with a special color\nRAINBOW. This is worth doing on its own account, just because it saves\nspace and time. For example, on the reg-33.15.1 test case in\ntest_regex.sql (a moderately large real-world RE), I find that HEAD\nrequires 1377614 bytes to represent the compiled RE, and the peak space\nusage during pg_regcomp() is 3124376 bytes. With this patch, that drops\nto 1077166 bytes for the RE (21% savings) with peak compilation space\n2800752 bytes (10% savings). Moreover, the runtime for that test case\ndrops from ~57ms to ~44ms, a 22% savings. (This is mostly measuring the\nRE compilation time. Execution time should drop a bit too since miss()\nneed consider fewer arcs; but that savings is in a cold code path so it\nwon't matter much.) These aren't earth-shattering numbers of course,\nbut for the amount of code needed, it seems well worth while.\n\nA possible point of contention is that I exposed the idea of a rainbow\narc in the regexport.h APIs, which will force consumers of that API\nto adapt --- see the changes to contrib/pg_trgm for an example. 
I'm\nnot too concerned about this because I kinda suspect that pg_trgm is\nthe only consumer of that API anywhere. (codesearch.debian.net knows\nof no others, anyway.) We could in principle hide the change by\nhaving the regexport functions expand a rainbow arc into one for\neach color, but that seems like make-work. pg_trgm would certainly\nnot see it as an improvement, and in general users of that API should\nappreciate recognizing rainbows as such, since they might be able to\napply optimizations that depend on doing so.\n\nWhich brings us to 0002, which is exactly such an optimization.\nThe idea here is to short-circuit character-by-character scanning\nwhen matching a sub-NFA that is like \".\" or \".*\" or variants of\nthat, ie it will match any sequence of some number of characters.\nThis requires the ability to recognize that a particular pair of\nNFA states are linked by a rainbow, so it's a lot less painful\nto do when rainbows are represented explicitly. The example that\ngot me interested in this is adapted from a Tcl trouble report:\n\nselect array_dims(regexp_matches(repeat('x',40) || '=' || repeat('y',50000),\n '^(.*)=(.*)$'));\n\nOn my machine, this takes about 6 seconds in HEAD, because there's an\nO(N^2) effect: we try to match the sub-NFA for the first \"(.*)\" capture\ngroup to each possible starting string, and only after expensively\nverifying that tautological match do we check to see if the next\ncharacter is \"=\". By not having to do any per-character work to decide\nthat .* matches a substring, the O(N^2) behavior is removed and the time\ndrops to about 7 msec.\n\n(One could also imagine fixing this by rearranging things to check for\nthe \"=\" match before verifying the capture-group matches. That's an\nidea I hope to look into in future, because it could help for cases\nwhere the variable parts are not merely \".*\". 
But I don't have clear\nideas about how to do that, and in any case \".*\" is common enough that\nthe present change should still be helpful.)\n\nThere are two non-boilerplate parts of the 0002 patch. One is the\ncheckmatchall() function that determines whether an NFA is match-all,\nand if so what the min and max match lengths are. This is actually not\nvery complicated once you understand what the regex engine does at the\n\"pre\" and \"post\" states. (See the \"Detailed semantics\" part of\nregex/README for some info about that, which I tried to clarify as part\nof the patch.) Other than those endpoint conditions it's just a\nrecursive graph search. The other hard part is the changes in\nrege_dfa.c to provide the actual short-circuit behavior at runtime.\nThat's ticklish because it's trying to emulate some overly complicated\nand underly documented code, particularly in longest() and shortest().\nI think that stuff is right; I've studied it and tested it. But it\ncould use more eyeballs.\n\nNotably, I had to add some more test cases to test_regex.sql to exercise\nthe short-circuit part of matchuntil() properly. 
That's only used for\nlookbehind constraints, so we won't hit the short-circuit path except\nwith something like '(?<=..)', which is maybe a tad silly.\n\nI'll add this to the upcoming commitfest.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 10 Feb 2021 23:39:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Some regular-expression performance hacking" }, { "msg_contents": "Hi Tom,\n\nOn Thu, Feb 11, 2021, at 05:39, Tom Lane wrote:\n>0001-invent-rainbow-arcs.patch\n>0002-recognize-matchall-NFAs.patch\n\nMany thanks for working on the regex engine,\nthis looks like an awesome optimization.\n\nTo test the correctness of the patches,\nI thought it would be nice with some real-life regexes,\nand just as important, some real-life text strings,\nto which the real-life regexes are applied to.\n\nI therefore patched Chromium's v8 regexes engine,\nto log the actual regexes that get compiled when\nvisiting websites, and also the text strings that\nare the regexes are applied to during run-time\nwhen the regexes are executed.\n\nI logged the regex and text strings as base64 encoded\nstrings to STDOUT, to make it easy to grep out the data,\nso it could be imported into PostgreSQL for analytics.\n\nIn total, I scraped the first-page of some ~50k websites,\nwhich produced 45M test rows to import,\nwhich when GROUP BY pattern and flags was reduced\ndown to 235k different regex patterns,\nand 1.5M different text string subjects.\n\nHere are some statistics on the different flags used:\n\nSELECT *, SUM(COUNT) OVER () FROM (SELECT flags, COUNT(*) FROM patterns GROUP BY flags) AS x ORDER BY COUNT DESC;\nflags | count | sum\n-------+--------+--------\n | 150097 | 235204\ni | 43537 | 235204\ng | 22029 | 235204\ngi | 15416 | 235204\ngm | 2411 | 235204\ngim | 602 | 235204\nm | 548 | 235204\nim | 230 | 235204\ny | 193 | 235204\ngy | 60 | 235204\ngiy | 29 | 235204\ngiu | 26 | 235204\nu | 11 | 235204\niy | 6 | 235204\ngu | 5 | 235204\ngimu | 2 | 
235204\niu | 1 | 235204\nmy | 1 | 235204\n(18 rows)\n\nAs we can see, no flag at all is the most common, followed by the \"i\" flag.\n\nMost of the Javascript-regexes (97%) could be understood by PostgreSQL,\nonly 3% produced some kind of error, which is not unexpected,\nsince some Javascript-regex features like \\w and \\W have different\nsyntax in PostgreSQL:\n\nSELECT *, SUM(COUNT) OVER () FROM (SELECT is_match,error,COUNT(*) FROM subjects GROUP BY is_match,error) AS x ORDER BY count DESC;\nis_match | error | count | sum\n----------+---------------------------------------------------------------+--------+---------\nf | | 973987 | 1489489\nt | | 474225 | 1489489\n | invalid regular expression: invalid escape \\ sequence | 39141 | 1489489\n | invalid regular expression: invalid character range | 898 | 1489489\n | invalid regular expression: invalid backreference number | 816 | 1489489\n | invalid regular expression: brackets [] not balanced | 327 | 1489489\n | invalid regular expression: invalid repetition count(s) | 76 | 1489489\n | invalid regular expression: quantifier operand invalid | 17 | 1489489\n | invalid regular expression: parentheses () not balanced | 1 | 1489489\n | invalid regular expression: regular expression is too complex | 1 | 1489489\n(10 rows)\n\nHaving had some fun looking at statistics, let's move on to look at if there are any\nobservable differences between HEAD (8063d0f6f56e53edd991f53aadc8cb7f8d3fdd8f)\nand when these two patches have been applied.\n\nTo detect any differences,\nfor each (regex pattern, text string subject) pair,\nthe columns,\n is_match boolean\n captured text[]\n error text\nwere set by a PL/pgSQL function running HEAD:\n\n BEGIN\n _is_match := _subject ~ _pattern;\n _captured := regexp_match(_subject, _pattern);\n EXCEPTION WHEN OTHERS THEN\n UPDATE subjects SET\n error = SQLERRM\n WHERE subject_id = _subject_id;\n CONTINUE;\n END;\n UPDATE subjects SET\n is_match = _is_match,\n captured = _captured\n WHERE subject_id 
= _subject_id;\n\nThe patches\n\n 0001-invent-rainbow-arcs.patch\n 0002-recognize-matchall-NFAs.patch\n\nwere then applied and this query was executed to spot any differences:\n\nSELECT\n is_match <> (subject ~ pattern) AS is_match_diff,\n captured IS DISTINCT FROM regexp_match(subject, pattern) AS captured_diff,\n COUNT(*)\nFROM subjects\nWHERE error IS NULL\nAND (is_match <> (subject ~ pattern) OR captured IS DISTINCT FROM regexp_match(subject, pattern))\nGROUP BY 1,2\nORDER BY 3 DESC\n;\n\nThe query was first run on the unpatched HEAD to verify it detects no results.\n0 rows indeed, and it took this long to finish the query:\n\nTime: 186077.866 ms (03:06.078)\n\nRunning the same query with the two patches, was significantly faster:\n\nTime: 111785.735 ms (01:51.786)\n\nNo is_match differences were detected, good!\n\nHowever, there were 23 cases where what got captured differed:\n\n-[ RECORD 1 ]--+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npattern | (?:^v-([a-z0-9-]+))?(?:(?::|^@|^#)(\\[[^\\]]+\\]|[^\\.]+))?(.+)?$\nsubject | v-cloak\nis_match_head | t\ncaptured_head | {cloak,NULL,NULL}\nis_match_patch | t\ncaptured_patch | {NULL,NULL,v-cloak}\n-[ RECORD 2 ]--+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npattern | (?:^v-([a-z0-9-]+))?(?:(?::|^@|^#)(\\[[^\\]]+\\]|[^\\.]+))?(.+)?$\nsubject | v-if\nis_match_head | t\ncaptured_head | {if,NULL,NULL}\nis_match_patch | t\ncaptured_patch | {NULL,NULL,v-if}\n-[ RECORD 3 
]--+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npattern | (^.*://)?((www.)?a5oc.com).*\nsubject | https://a5oc.com/attachments/6b184e79-6a7f-43e0-ac59-7ed9d0a8eb7e-jpeg.179582/\nis_match_head | t\ncaptured_head | {https://,a5oc.com,NULL <https://%2Ca5oc.com%2Cnull/>}\nis_match_patch | t\ncaptured_patch |\n-[ RECORD 4 ]--+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npattern | (^.*://)?((www.)?allfordmustangs.com).*\nsubject | https://allfordmustangs.com/attachments/e463e329-0397-4e13-ad41-f30c6bc0659e-jpeg.779299/\nis_match_head | t\ncaptured_head | {https://,allfordmustangs.com,NULL <https://%2Callfordmustangs.com%2Cnull/>}\nis_match_patch | t\ncaptured_patch |\n-[ RECORD 5 ]--+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npattern | (^.*://)?((www.)?audi-forums.com).*\nsubject | https://audi-forums.com/attachments/screenshot_20210207-151100_ebay-jpg.11506/\nis_match_head | t\ncaptured_head | {https://,audi-forums.com,NULL <https://%2Caudi-forums.com%2Cnull/>}\nis_match_patch | t\ncaptured_patch |\n-[ RECORD 6 ]--+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npattern | 
(^.*://)?((www.)?can-amforum.com).*\nsubject | https://can-amforum.com/attachments/resized_20201214_163325-jpeg.101395/\nis_match_head | t\ncaptured_head | {https://,can-amforum.com,NULL <https://%2Ccan-amforum.com%2Cnull/>}\nis_match_patch | t\ncaptured_patch |\n-[ RECORD 7 ]--+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npattern | (^.*://)?((www.)?contractortalk.com).*\nsubject | https://contractortalk.com/attachments/maryann-porch-roof-quote-12feb2021-jpg.508976/\nis_match_head | t\ncaptured_head | {https://,contractortalk.com,NULL <https://%2Ccontractortalk.com%2Cnull/>}\nis_match_patch | t\ncaptured_patch |\n-[ RECORD 8 ]--+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npattern | (^.*://)?((www.)?halloweenforum.com).*\nsubject | https://halloweenforum.com/attachments/dead-fred-head-before-and-after-jpg.744080/\nis_match_head | t\ncaptured_head | {https://,halloweenforum.com,NULL <https://%2Challoweenforum.com%2Cnull/>}\nis_match_patch | t\ncaptured_patch |\n-[ RECORD 9 ]--+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npattern | (^.*://)?((www.)?horseforum.com).*\nsubject | https://horseforum.com/attachments/dd90f089-9ae9-4521-98cd-27bda9ad38e9-jpeg.1109329/\nis_match_head | t\ncaptured_head | {https://,horseforum.com,NULL <https://%2Chorseforum.com%2Cnull/>}\nis_match_patch | t\ncaptured_patch |\n-[ RECORD 10 
]-+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npattern | (^.*://)?((www.)?passatworld.com).*\nsubject | https://passatworld.com/attachments/clean-passat-jpg.102337/\nis_match_head | t\ncaptured_head | {https://,passatworld.com,NULL <https://%2Cpassatworld.com%2Cnull/>}\nis_match_patch | t\ncaptured_patch |\n-[ RECORD 11 ]-+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npattern | (^.*://)?((www.)?plantedtank.net).*\nsubject | https://plantedtank.net/attachments/brendon-60p-jpg.1026075/\nis_match_head | t\ncaptured_head | {https://,plantedtank.net,NULL <https://%2Cplantedtank.net%2Cnull/>}\nis_match_patch | t\ncaptured_patch |\n-[ RECORD 12 ]-+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npattern | (^.*://)?((www.)?vauxhallownersnetwork.co.uk).*\nsubject | https://vauxhallownersnetwork.co.uk/attachments/opelnavi-jpg.96639/\nis_match_head | t\ncaptured_head | {https://,vauxhallownersnetwork.co.uk,NULL <https://%2Cvauxhallownersnetwork.co.uk%2Cnull/>}\nis_match_patch | t\ncaptured_patch |\n-[ RECORD 13 ]-+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npattern | (^.*://)?((www.)?volvov40club.com).*\nsubject | 
https://volvov40club.com/attachments/img_20210204_164157-jpg.17356/\nis_match_head | t\ncaptured_head | {https://,volvov40club.com,NULL <https://%2Cvolvov40club.com%2Cnull/>}\nis_match_patch | t\ncaptured_patch |\n-[ RECORD 14 ]-+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npattern | (^.*://)?((www.)?vwidtalk.com).*\nsubject | https://vwidtalk.com/attachments/1613139846689-png.1469/\nis_match_head | t\ncaptured_head | {https://,vwidtalk.com,NULL <https://%2Cvwidtalk.com%2Cnull/>}\nis_match_patch | t\ncaptured_patch |\n-[ RECORD 15 ]-+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npattern | (^.*://)?((www.)?yellowbullet.com).*\nsubject | https://yellowbullet.com/attachments/20210211_133934-jpg.204604/\nis_match_head | t\ncaptured_head | {https://,yellowbullet.com,NULL <https://%2Cyellowbullet.com%2Cnull/>}\nis_match_patch | t\ncaptured_patch |\n-[ RECORD 16 ]-+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npattern | (^[^\\?]*)?(\\?[^#]*)?(#.*$)?\nsubject | https://www.disneyonice.com/oneIdResponder.html\nis_match_head | t\ncaptured_head | {https://www.disneyonice.com/oneIdResponder.html,NULL,NULL}\nis_match_patch | t\ncaptured_patch |\n-[ RECORD 17 
]-+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npattern | (^[a-zA-Z0-9\\/_-]+)*(\\.[a-zA-Z]+)?\nsubject | /\nis_match_head | t\ncaptured_head | {/,NULL}\nis_match_patch | t\ncaptured_patch |\n-[ RECORD 18 ]-+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npattern | (^[a-zA-Z0-9\\/_-]+)*(\\.[a-zA-Z]+)?\nsubject | /en.html\nis_match_head | t\ncaptured_head | {/en,.html}\nis_match_patch | t\ncaptured_patch |\n-[ RECORD 19 ]-+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npattern | (^https?:\\/\\/)?(((\\[[^\\]]+\\])|([^:\\/\\?#]+))(:(\\d+))?)?([^\\?#]*)(.*)?\nsubject | https://e.echatsoft.com/mychat/visitor\nis_match_head | t\ncaptured_head | {https://,e.echatsoft.com,e.echatsoft.com,NULL,e.echatsoft.com,NULL,NULL,/mychat/visitor <https://%2Ce.echatsoft.com%2Ce.echatsoft.com%2Cnull%2Ce.echatsoft.com%2Cnull%2Cnull%2C/mychat/visitor>,\"\"}\nis_match_patch | t\ncaptured_patch | {NULL,https,https,NULL,https,NULL,NULL,://e.echatsoft.com/mychat/visitor,\"\"}\n-[ RECORD 20 ]-+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n----------------------------------------\npattern | 
(^|.)41nbc.com$|(^|.)41nbc.dev$|(^|.)52.23.179.12$|(^|.)52.3.245.221$|(^|.)clipsyndicate.com$|(^|.)michaelbgiordano.com$|(^|.)syndicaster.tv$|(^|.)wdef.com$|(^|.)wdef.dev$|(^|.)wxxv.mysiteserver.net$|(^|.)wxxv25.dev$|(^|.)clipsyndicate.com$|(^|.)syndicaster.tv$\nsubject | wdef.com\nis_match_head | t\ncaptured_head | {NULL,NULL,NULL,NULL,NULL,NULL,NULL,\"\",NULL,NULL,NULL,NULL,NULL}\nis_match_patch | t\ncaptured_patch |\n-[ RECORD 21 ]-+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npattern | ^((^\\w+:|^)\\/\\/)?(?:www\\.)?\nsubject | https://www.deputy.com/\nis_match_head | t\ncaptured_head | {https://,https <https://%2Chttps/>:}\nis_match_patch | t\ncaptured_patch |\n-[ RECORD 22 ]-+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npattern | ^((^\\w+:|^)\\/\\/)?(?:www\\.)?\nsubject | https://www.westernsydney.edu.au/\nis_match_head | t\ncaptured_head | {https://,https <https://%2Chttps/>:}\nis_match_patch | t\ncaptured_patch |\n-[ RECORD 23 ]-+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\npattern | ^(https?:){0,1}\\/\\/|\nsubject | https://ui.powerreviews.com/api/\nis_match_head | t\ncaptured_head | {https:}\nis_match_patch | t\ncaptured_patch | {NULL}\n\nThe code to reproduce the results have been pushed here:\nhttps://github.com/truthly/regexes-in-the-wild\n\nLet me know if you want access to the dataset,\nI could open up a 
port to my PostgreSQL so you could take a dump.\n\nSELECT\n pg_size_pretty(pg_relation_size('patterns')) AS patterns,\n pg_size_pretty(pg_relation_size('subjects')) AS subjects;\npatterns | subjects\n----------+----------\n20 MB | 568 MB\n(1 row)\n\n/Joel", "msg_date": "Sat, 13 Feb 2021 18:19:34 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "\"Joel Jacobson\" 
<joel@compiler.org> writes:\n> In total, I scraped the first-page of some ~50k websites,\n> which produced 45M test rows to import,\n> which when GROUP BY pattern and flags was reduced\n> down to 235k different regex patterns,\n> and 1.5M different text string subjects.\n\nThis seems like an incredibly useful test dataset.\nI'd definitely like a copy.\n\n> No is_match differences were detected, good!\n\nCool ...\n\n> However, there were 23 cases where what got captured differed:\n\nI shall take a closer look at that.\n\nMany thanks for doing this work!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 13 Feb 2021 12:35:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> No is_match differences were detected, good!\n> However, there were 23 cases where what got captured differed:\n\nThese all stem from the same oversight: checkmatchall() was being\ntoo cavalier by ignoring \"pseudocolor\" arcs, which are arcs that\nmatch start-of-string or end-of-string markers. I'd supposed that\npseudocolor arcs necessarily match parallel RAINBOW arcs, because\nthey start out that way (cf. newnfa). But it turns out that\nsome edge-of-string constraints can be optimized in such a way that\nthey only appear in the final NFA in the guise of missing or extra\npseudocolor arcs. 
We have to actually check that the pseudocolor arcs\nmatch the RAINBOW arcs, otherwise our \"matchall\" NFA isn't one because\nit acts differently at the start or end of the string than it does\nelsewhere.\n\nSo here's a revised pair of patches (0001 is actually the same as\nbefore).\n\nThanks again for testing!\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 13 Feb 2021 16:11:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "On Sat, Feb 13, 2021, at 22:11, Tom Lane wrote:\n>0001-invent-rainbow-arcs-2.patch\n>0002-recognize-matchall-NFAs-2.patch\n\nI've successfully tested both patches against the 1.5M regexes-in-the-wild dataset.\n\nOut of the 1489489 (pattern, text string) pairs tested,\nthere was only one single deviation:\n\nThis 100577 bytes big regex (pattern_id = 207811)...\n\n\\.(ac|com\\.ac|edu\\.ac|gov\\.ac|net\\.ac|mil\\.ac| ... |wmflabs\\.org|yolasite\\.com|za\\.net|za\\.org)$\n\n...previously raised...\n\n error invalid regular expression: regular expression is too complex\n\n...but now goes through:\n\nis_match <NULL> => t\ncaptured <NULL> => {de}\nerror invalid regular expression: regular expression is too complex => <NULL>\n\nNice. The patched regex engine is apparently capable of handling even more complex regexes than before.\n\nThe test that found the deviation tests each (pattern, text string) individually,\nto catch errors. 
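The per-pair loop described here amounts to: compile the pattern, trap any compile error, and record the match and capture results as data rather than failing fast. A scaled-down sketch of the same idea in Python (purely illustrative; Python's `re` escape syntax and matching rules differ from PostgreSQL's advanced regular expressions, so individual results will not agree in general):

```python
import re

def run_pair(pattern, subject):
    """Apply one (pattern, subject) pair, recording a compile error
    instead of aborting the whole run, like the per-pair harness."""
    try:
        compiled = re.compile(pattern)
    except re.error as exc:
        return {"error": str(exc), "is_match": None, "captured": None}
    m = compiled.search(subject)
    return {"error": None,
            "is_match": m is not None,
            "captured": list(m.groups()) if m else None}

pairs = [
    (r"(^.*://)?((www.)?a5oc.com).*", "https://a5oc.com/attachments/x.jpg"),
    (r"foo[", "an unbalanced bracket is recorded, not fatal"),
]
results = [run_pair(p, s) for p, s in pairs]
```

Storing errors alongside results is what makes it possible to compare error sets, and not just match outcomes, across engine versions.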
But I've also made a separate query to just test regexes\nknown to not cause errors, to allow testing all regexes in one big query,\nwhich fully utilizes the CPU cores and also runs quicker.\n\nBelow is the result of the performance test query:\n\n\\timing\n\nSELECT\n tests.is_match IS NOT DISTINCT FROM (subjects.subject ~ patterns.pattern),\n tests.captured IS NOT DISTINCT FROM regexp_match(subjects.subject, patterns.pattern),\n COUNT(*)\nFROM tests\nJOIN subjects ON subjects.subject_id = tests.subject_id\nJOIN patterns ON patterns.pattern_id = subjects.pattern_id\nJOIN server_versions ON server_versions.server_version_num = tests.server_version_num\nWHERE server_versions.server_version = current_setting('server_version')\nAND tests.error IS NULL\nGROUP BY 1,2\nORDER BY 1,2;\n\n-- 8facf1ea00b7a0c08c755a0392212b83e04ae28a:\n\n?column? | ?column? | count\n----------+----------+---------\nt | t | 1448212\n(1 row)\n\nTime: 592196.145 ms (09:52.196)\n\n-- 8facf1ea00b7a0c08c755a0392212b83e04ae28a+patches:\n\n?column? | ?column? | count\n----------+----------+---------\nt | t | 1448212\n(1 row)\n\nTime: 461739.364 ms (07:41.739)\n\nThat's an impressive 22% speed-up!\n\n/Joel", "msg_date": "Sun, 14 Feb 2021 13:52:55 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> I've successfully tested both patches against the 1.5M regexes-in-the-wild dataset.\n> Out of the 1489489 (pattern, text string) pairs tested,\n> there was only one single deviation:\n> This 100577 bytes big regex (pattern_id = 207811)...\n> ...\n> ...previously raised...\n> error invalid regular expression: regular expression is too complex\n> ...but now goes through:\n\n> Nice. 
The patched regex engine is apparently capable of handling even more complex regexes than before.\n\nYeah. There are various limitations that can lead to REG_ETOOBIG, but the\nmain ones are \"too many states\" and \"too many arcs\". The RAINBOW change\ndirectly reduces the number of arcs and thus makes larger regexes feasible.\nI'm sure it's coincidental that the one such example you captured happens\nto be fixed by this change, but hey I'll take it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 14 Feb 2021 11:45:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> Below is the result of the performance test query:\n> -- 8facf1ea00b7a0c08c755a0392212b83e04ae28a:\n> Time: 592196.145 ms (09:52.196)\n> -- 8facf1ea00b7a0c08c755a0392212b83e04ae28a+patches:\n> Time: 461739.364 ms (07:41.739)\n> That's an impressive 22% speed-up!\n\nI've been doing some more hacking over the weekend, and have a couple\nof additional improvements to show. The point of these two additional\npatches is to reduce the number of \"struct subre\" sub-regexps that\nthe regex parser creates. The subre's themselves aren't that large,\nso this might seem like it would have only small benefit. However,\neach subre requires its own NFA for the portion of the RE that it\nmatches. That adds space, and it also adds compilation time because\nwe run the \"optimize()\" pass separately for each such NFA. Maybe\nthere'd be a way to share some of that work, but I'm not very clear\nhow. 
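The cost structure being described, one independent optimize() pass per subre, each over that subre's own NFA, can be made concrete with a deliberately crude model (all sizes and the linear cost function below are invented for illustration and are not taken from the real engine):

```python
def optimize_cost(nfa_size):
    # stand-in for the real optimize() pass; assume work scales with NFA size
    return nfa_size

def compile_cost(nfa_sizes):
    # total compile-time cost: one optimize() pass per subre NFA
    return sum(optimize_cost(s) for s in nfa_sizes)

# Suppose an n-way alternation is represented by n - 1 alternation subres,
# each carrying an identical NFA covering the whole construct, plus one
# small NFA per branch:
n, branch = 10, 8
chained = compile_cost([n * branch] * (n - 1) + [branch] * n)

# With a single parent subre for the whole alternation instead:
flat = compile_cost([n * branch] + [branch] * n)
```

Under this model the chained shape does roughly O(n^2) total work while the flat shape does O(n), which is the kind of difference that shows up directly as compile time on large real-world alternations.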
In any case, not having a subre at all is clearly better where\nwe can manage it.\n\n0003 is a small patch that fixes up parseqatom() so that it doesn't\nemit no-op subre's for empty portions of a regexp branch that are\nadjacent to a \"messy\" regexp atom (that is, a capture node, a\nbackref, or an atom with greediness different from what preceded it).\n\n0004 is a rather larger patch whose result is to get rid of extra\nsubre's associated with alternation subre's. If we have a|b|c\nand any of those alternation branches are messy, we end up with\n\n\t *\n\t / \\\n\ta *\n\t / \\\n\t b *\n\t / \\\n\t c NULL\n\nwhere each \"*\" is an alternation subre node, and all those \"*\"'s have\nidentical NFAs that match the whole a|b|c construct. This means that\nfor an N-way alternation we're going to need something like O(N^2)\nwork to optimize all those NFAs. That's embarrassing (and I think\nit's my fault --- if memory serves, I put in this representation\nof messy alternations years ago).\n\nWe can improve matters by having just one parent node for an\nalternation:\n\n\t*\n\t \\\n\t a -> b -> c\n\nThat requires replacing the binary-tree structure of subre's\nwith a child-and-sibling arrangement, which is not terribly\ndifficult but accounts for most of the bulk of the patch.\n(I'd wanted to do that for years, but up till now I did not\nthink it would have any real material benefit.)\n\nThere might be more that can be done in this line, but that's\nas far as I got so far.\n\nI did some testing on this using your dataset (thanks for\ngiving me a copy) and this query:\n\nSELECT\n pattern,\n subject,\n is_match AS is_match_head,\n captured AS captured_head,\n subject ~ pattern AS is_match_patch,\n regexp_match(subject, pattern) AS captured_patch\nFROM subjects\nWHERE error IS NULL\nAND (is_match <> (subject ~ pattern)\n OR captured IS DISTINCT FROM regexp_match(subject, pattern));\n\nI got these runtimes (non-cassert builds):\n\nHEAD\t313661.149 ms 
(05:13.661)\n+0001\t297397.293 ms (04:57.397)\t5% better than HEAD\n+0002\t151995.803 ms (02:31.996)\t51% better than HEAD\n+0003\t139843.934 ms (02:19.844)\t55% better than HEAD\n+0004\t95034.611 ms (01:35.035)\t69% better than HEAD\n\nSince I don't have all the tables used in your query, I can't\ntry to reproduce your results exactly. I suspect the reason\nI'm getting a better percentage improvement than you did is\nthat the joining/grouping/ordering involved in your query\ncreates a higher baseline query cost.\n\nAnyway, I'm feeling pretty pleased with these results ...\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 14 Feb 2021 22:11:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "On Mon, Feb 15, 2021, at 04:11, Tom Lane wrote:\n>I got these runtimes (non-cassert builds):\n>\n>HEAD 313661.149 ms (05:13.661)\n>+0001 297397.293 ms (04:57.397) 5% better than HEAD\n>+0002 151995.803 ms (02:31.996) 51% better than HEAD\n>+0003 139843.934 ms (02:19.844) 55% better than HEAD\n>+0004 95034.611 ms (01:35.035) 69% better than HEAD\n>\n>Since I don't have all the tables used in your query, I can't\n>try to reproduce your results exactly. 
I suspect the reason\n>I'm getting a better percentage improvement than you did is\n>that the joining/grouping/ordering involved in your query\n>creates a higher baseline query cost.\n\nMind blowing speed-up, wow!\n\nI've tested all 4 patches successfully.\n\nTo eliminate the baseline cost of the join,\nI first created this table:\n\nCREATE TABLE performance_test AS\nSELECT\n subjects.subject,\n patterns.pattern,\n tests.is_match,\n tests.captured\nFROM tests\nJOIN subjects ON subjects.subject_id = tests.subject_id\nJOIN patterns ON patterns.pattern_id = subjects.pattern_id\nJOIN server_versions ON server_versions.server_version_num = tests.server_version_num\nWHERE server_versions.server_version = current_setting('server_version')\nAND tests.error IS NULL\n;\n\nThen I ran this query:\n\n\\timing\n\nSELECT\n is_match <> (subject ~ pattern),\n captured IS DISTINCT FROM regexp_match(subject, pattern),\n COUNT(*)\nFROM performance_test\nGROUP BY 1,2\nORDER BY 1,2\n;\n\nAll patches gave the same result:\n\n?column? | ?column? 
| count\n----------+----------+---------\nf | f | 1448212\n(1 row)\n\nI.e., no detected semantic differences.\n\nTiming differences:\n\nHEAD 570632.722 ms (09:30.633)\n+0001 472938.857 ms (07:52.939) 17% better than HEAD\n+0002 451638.049 ms (07:31.638) 20% better than HEAD\n+0003 439377.813 ms (07:19.378) 23% better than HEAD\n+0004 96447.038 ms (01:36.447) 83% better than HEAD\n\nI tested on my MacBook Pro 2.4GHz 8-Core Intel Core i9, 32 GB 2400 MHz DDR4 running macOS Big Sur 11.1:\n\nSELECT version();\n version\n----------------------------------------------------------------------------------------------------------------------\nPostgreSQL 14devel on x86_64-apple-darwin20.2.0, compiled by Apple clang version 12.0.0 (clang-1200.0.32.29), 64-bit\n(1 row)\n\nMy HEAD = 46d6e5f567906389c31c4fb3a2653da1885c18ee.\n\nPostgreSQL was compiled with just ./configure, no parameters, and the only non-default postgresql.conf settings were these:\nlog_destination = 'csvlog'\nlogging_collector = on\nlog_filename = 'postgresql.log'\n\nAmazing work!\n\nI hope to have a new dataset ready soon with regex flags for applied subjects as well.\n\n/Joel", "msg_date": "Mon, 15 Feb 2021 09:21:21 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> I've tested all 4 patches successfully.\n\nThanks!\n\nI found one other area that could be improved with the same idea of\ngetting rid of unnecessary subre's: right now, every pair of capturing\nparentheses gives rise to a \"capture\" subre with an NFA identical to\nits single child subre (which is what does the actual matching work).\nWhile this doesn't really add any runtime cost, the duplicate NFA\ndefinitely does add to the compilation cost, since we run it through\noptimization independently of the child.\n\nI initially thought that we could just flush capture subres altogether\nin favor of annotating their children with a \"we need to capture this\nresult\" marker. However, Spencer's regression tests soon exposed the\nflaw in that notion. It's legal to write \"((x))\" or even \"((((x))))\",\nso that there can be multiple sets of capturing parentheses with a\nsingle child node. The solution adopted in the attached 0005 is to\nhandle the innermost capture with a marker on the child subre, but if\nwe need an additional capture on a node that's already marked, put\na capture subre on top just like before. One could instead complicate\nthe data structure by allowing N capture markers on a single subre\nnode, but I judged that not to be a good tradeoff. I don't see any\nreason except spec compliance to allow such equivalent capture groups,\nso I don't care if they're a bit inefficient. 
(If anyone knows of a\nuseful application for writing REs like this, we could reconsider that\nchoice.)\n\nOne small issue with marking the child directly is that we can't get\naway any longer with overlaying capture and backref subexpression\nnumbers, since you could theoretically write (\\1) which'd result in\nneeding to put a capture label on a backref subre. This could again\nhave been handled by making the capture a separate node, but I really\ndon't much care for the way that subre.subno has been overloaded for\nthree(!) different purposes depending on node type. So I just broke\nit into three separate fields. Maybe the incremental cost of the\nlarger subre struct was worth worrying about back in 1997 ... but\nI kind of doubt that it was a useful micro-optimization even then,\nconsidering the additional NFA baggage that every subre carries.\n\nAlso, I widened \"subre.id\" from short to int, since the narrower field\nno longer saves anything given the new struct layout. The existing\nchoice was dubious already, because every other use of subre ID\nnumbers was int or even size_t, and there was nothing checking for\noverflow of the id fields. (Although perhaps it doesn't matter,\nsince I'm unsure that the id fields are really used for anything\nexcept debugging purposes.)\n\nFor me, 0005 makes a fairly perceptible difference on your test case\nsubject_id = 611875, which I've been paying attention to because it's\nthe one that failed with \"regular expression is too complex\" before.\nI see about a 20% time savings from 0004 on that case, but not really\nany noticeable difference in the total runtime for the whole suite.\nSo I think we're getting to the point of diminishing returns for\nthis concept (another reason for not chasing after optimization of\nthe duplicate-captures case). 
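The equivalent-captures corner case is easy to see from the user-visible side. The demonstration below uses Python's re module purely because it is convenient to run; the point being illustrated (every capture group wrapped around the same single atom reports the same text) is a property of capture semantics generally, not of any one engine:

```python
import re

# Nested capture groups around a single atom all report the same text,
# which is why only the innermost one needs a real capture node:
m = re.match(r"((((x))))", "x")
print(m.groups())   # ('x', 'x', 'x', 'x')

# Captures around distinct atoms are independent and all carry information:
m2 = re.match(r"((a)(b))", "ab")
print(m2.groups())  # ('ab', 'a', 'b')
```

Since the four groups in the first example are guaranteed equal, collapsing all but one of them loses nothing observable, and such patterns have no obvious practical use, which is the tradeoff argued for here.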
Still, we're clearly way ahead of\nwhere we started.\n\nAttached is an updated patch series; it's rebased over 4e703d671\nwhich took care of some not-really-related fixes, and I made a\npass of cleanup and comment improvements. I think this is pretty\nmuch ready to commit, unless you want to do more testing or\ncode-reading.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 17 Feb 2021 16:00:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "On Wed, Feb 17, 2021, at 22:00, Tom Lane wrote:\n> Attached is an updated patch series; it's rebased over 4e703d671\n> which took care of some not-really-related fixes, and I made a\n> pass of cleanup and comment improvements. I think this is pretty\n> much ready to commit, unless you want to do more testing or\n> code-reading.\n\nI've produced a new dataset which now also includes the regex flags (if any) used for each subject applied to a pattern.\n\nThe new dataset contains 318364 patterns and 4474520 subjects.\n(The old one had 235204 patterns and 1489489 subjects.)\n\nI've tested the new dataset against PostgreSQL 10.16, 11.11, 12.6, 13.2, HEAD (4e703d671) and HEAD+patches.\n\nI based the comparisons on the subjects that didn't cause an error on 13.2:\n\nCREATE TABLE performance_test AS\nSELECT\n subjects.subject,\n patterns.pattern,\n patterns.flags,\n tests.is_match,\n tests.captured\nFROM tests\nJOIN subjects ON subjects.subject_id = tests.subject_id\nJOIN patterns ON patterns.pattern_id = subjects.pattern_id\nWHERE tests.error IS NULL\n;\n\nI then measured the query below for each PostgreSQL version:\n\n\\timing\nSELECT version();\nSELECT\n is_match <> (subject ~ pattern) AS is_match_diff,\n captured IS DISTINCT FROM regexp_match(subject, pattern, flags) AS captured_diff,\n COUNT(*)\nFROM performance_test\nGROUP BY 1,2\nORDER BY 1,2\n;\n\nAll versions produces the same result:\n\nis_match_diff | captured_diff | 
count\n---------------+---------------+---------\nf | f | 3254769\n(1 row)\n\nGood! Not a single case that differs of over 3 million different regex pattern/subject combinations,\nbetween five major PostgreSQL versions! That's a very stable regex engine.\n\nTo get a feeling for the standard deviation of the timings,\nI executed the same query above three times for each PostgreSQL version:\n\nPostgreSQL 10.16 on x86_64-apple-darwin14.5.0, compiled by Apple LLVM version 7.0.2 (clang-700.1.81), 64-bit\nTime: 795674.830 ms (13:15.675)\nTime: 794249.704 ms (13:14.250)\nTime: 771036.707 ms (12:51.037)\n\nPostgreSQL 11.11 on x86_64-apple-darwin16.7.0, compiled by Apple LLVM version 8.1.0 (clang-802.0.42), 64-bit\nTime: 765466.191 ms (12:45.466)\nTime: 787135.316 ms (13:07.135)\nTime: 779582.635 ms (12:59.583)\n\nPostgreSQL 12.6 on x86_64-apple-darwin16.7.0, compiled by Apple LLVM version 8.1.0 (clang-802.0.42), 64-bit\nTime: 785500.516 ms (13:05.501)\nTime: 784511.591 ms (13:04.512)\nTime: 786727.973 ms (13:06.728)\n\nPostgreSQL 13.2 on x86_64-apple-darwin19.6.0, compiled by Apple clang version 11.0.3 (clang-1103.0.32.62), 64-bit\nTime: 758514.703 ms (12:38.515)\nTime: 755883.600 ms (12:35.884)\nTime: 746522.107 ms (12:26.522)\n\nPostgreSQL 14devel on x86_64-apple-darwin20.3.0, compiled by Apple clang version 12.0.0 (clang-1200.0.32.29), 64-bit\nHEAD (4e703d671)\nTime: 519620.646 ms (08:39.621)\nTime: 518998.366 ms (08:38.998)\nTime: 519696.129 ms (08:39.696)\n\nHEAD (4e703d671)+0001+0002+0003+0004+0005\nTime: 141290.329 ms (02:21.290)\nTime: 141849.709 ms (02:21.850)\nTime: 141630.819 ms (02:21.631)\n\nThat's a mind-blowing speed-up!\n\nI also ran the more detailed test between 13.2 and HEAD+patches,\nthat also tests for differences in errors.\n\nLike before, one similar improvement was found,\nwhich previously resulted in an error, but now goes through OK:\n\nSELECT * FROM vdeviations;\n-[ RECORD 1 
]----+-------------------------------------------------------------------------------------------------------\npattern | \\.(ac|com\\.ac|edu\\.ac|gov\\.ac|net\\.ac|mi ... 100497 chars ... abs\\.org|yolasite\\.com|za\\.net|za\\.org)$\nflags |\nsubject | www.aeroexpo.online\ncount | 1\na_server_version | 13.2\na_duration | 00:00:00.298253\na_is_match |\na_captured |\na_error | invalid regular expression: regular expression is too complex\nb_server_version | 14devel\nb_duration | 00:00:00.665958\nb_is_match | t\nb_captured | {online}\nb_error |\n\nVery nice.\n\nI've uploaded the new dataset to the same place as before.\n\nThe schema for it can be found at https://github.com/truthly/regexes-in-the-wild\n\nIf anyone else would like a copy of the 715MB dataset, please let me know.\n\n/Joel", "msg_date": "Thu, 18 Feb 2021 11:30:09 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "On Thu, Feb 18, 2021, at 11:30, Joel Jacobson wrote:\n>SELECT * FROM vdeviations;\n>-[ RECORD 1 ]----+-------------------------------------------------------------------------------------------------------\n>pattern          | \\.(ac|com\\.ac|edu\\.ac|gov\\.ac|net\\.ac|mi ... 100497 chars ... abs\\.org|yolasite\\.com|za\\.net|za\\.org)$\n\nHeh, what a funny coincidence:\nThe regex I used to shrink the very-long-pattern,\nactually happens to run a lot faster with the patches.\n\nI noticed it when trying to read from the vdeviations view in PostgreSQL 13.2.\n\nHere is my little helper-function which I used to shrink patterns/subjects longer than N characters:\n\nCREATE OR REPLACE FUNCTION shrink_text(text,integer) RETURNS text LANGUAGE sql AS $$\nSELECT CASE WHEN length($1) < $2 THEN $1 ELSE\n format('%s ... %s chars ... 
%s', m[1], length(m[2]), m[3])\nEND\nFROM (\n SELECT regexp_matches($1,format('^(.{1,%1$s})(.*?)(.{1,%1$s})$',$2/2)) AS m\n) AS q\n$$;\n\nThe regex aims to produce three capture groups,\nwhere I wanted the first and third ones to be greedy\nand match up to $2 characters (controlled by the second input param to the function),\nand the second capture group in the middle to be non-greedy,\nbut match the remainder to make up a fully anchored match.\n\nIt works like expected in both 13.2 and HEAD+patches, but the speed-up is enormous:\n\nPostgreSQL 13.2:\nEXPLAIN ANALYZE SELECT regexp_matches(repeat('a',100000),'^(.{1,80})(.*?)(.{1,80})$');\n QUERY PLAN\n-------------------------------------------------------------------------------------------------\nProjectSet (cost=0.00..0.02 rows=1 width=32) (actual time=23600.816..23600.838 rows=1 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.001..0.002 rows=1 loops=1)\nPlanning Time: 0.432 ms\nExecution Time: 23600.859 ms\n(4 rows)\n\nHEAD+0001+0002+0003+0004+0005:\nEXPLAIN ANALYZE SELECT regexp_matches(repeat('a',100000),'^(.{1,80})(.*?)(.{1,80})$');\n QUERY PLAN\n-------------------------------------------------------------------------------------------\nProjectSet (cost=0.00..0.02 rows=1 width=32) (actual time=36.656..36.661 rows=1 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.000..0.002 rows=1 loops=1)\nPlanning Time: 0.575 ms\nExecution Time: 36.689 ms\n(4 rows)\n\nCool stuff.\n\n/Joel", "msg_date": "Thu, 18 Feb 2021 12:04:55 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n>> I've produced a new dataset which now also includes the regex flags (if\n>> any) used for each subject applied to a pattern.\n\nAgain, thanks for collecting this data! I'm a little confused about\nhow you produced the results in the \"tests\" table, though. It sort\nof looks like you tried to feed the Javascript flags to regexp_match(),\nwhich unsurprisingly doesn't work all that well. Even discounting\nthat, I'm not getting quite the same results, and I don't understand\nwhy not. So how was that made from the raw \"patterns\" and \"subjects\"\ntables?\n\n> PostgreSQL 13.2 on x86_64-apple-darwin19.6.0, compiled by Apple clang version 11.0.3 (clang-1103.0.32.62), 64-bit\n> Time: 758514.703 ms (12:38.515)\n> Time: 755883.600 ms (12:35.884)\n> Time: 746522.107 ms (12:26.522)\n> \n> PostgreSQL 14devel on x86_64-apple-darwin20.3.0, compiled by Apple clang version 12.0.0 (clang-1200.0.32.29), 64-bit\n> HEAD (4e703d671)\n> Time: 519620.646 ms (08:39.621)\n> Time: 518998.366 ms (08:38.998)\n> Time: 519696.129 ms (08:39.696)\n\nHmmm ... we haven't yet committed any performance-relevant changes to the\nregex code, so it can't take any credit for this improvement from 13.2 to\nHEAD. I speculate that this is due to some change in our parallelism\nstuff (since I observe that this query is producing a parallelized hash\nplan).
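(For the record, taking the middle of the three timings reported above for\neach build, the ratios work out roughly like this -- a trivial sketch,\nnothing more:)\n\n```python\n# Median wall-clock times in seconds, from the three runs quoted above.\nt_13_2 = 755.9      # PostgreSQL 13.2\nt_head = 519.6      # HEAD (4e703d671), before the regex patches\nt_patched = 141.63  # HEAD + patches 0001..0005\n\nprint(round(t_13_2 / t_patched, 1))  # ~5.3x faster than 13.2 overall\nprint(round(t_head / t_patched, 1))  # ~3.7x from the regex patches alone\n```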
Still, the next drop to circa 2:21 runtime is impressive enough\nby itself.\n\n> Heh, what a funny coincidence:\n> The regex I used to shrink the very-long-pattern,\n> actually happens to run a lot faster with the patches.\n\nYeah, that just happens to be a poster child for the MATCHALL idea:\n\n> EXPLAIN ANALYZE SELECT regexp_matches(repeat('a',100000),'^(.{1,80})(.*?)(.{1,80})$');\n\nEach of the parenthesized subexpressions of the RE is successfully\nrecognized as being MATCHALL, with length range 1..80 for two of them and\n0..infinity for the middle one. That means the engine doesn't have to\nphysically scan the text to determine whether a possible division point\nsatisfies the sub-regexp; and that means we can find the correct division\npoints in O(N) not O(N^2) time.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Feb 2021 13:10:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "I thought it was worth looking a little more closely at the error\ncases in this set of tests, as a form of competition analysis versus\nJavascript's regex engine. I ran through the cases that gave errors,\nand pinned down exactly what was causing the error for as many cases\nas I could. (These results are from your first test corpus, but\nI doubt the second one would give different conclusions.)\n\nWe have these errors reported in the test corpus:\n\n error | count \n-----------------------------------+-------\n invalid escape \\ sequence | 39141\n invalid character range | 898\n invalid backreference number | 816\n brackets [] not balanced | 327\n invalid repetition count(s) | 76\n quantifier operand invalid | 17\n parentheses () not balanced | 1\n regular expression is too complex | 1\n\nThe existing patchset takes care of the one \"regular expression is too\ncomplex\" failure. 
Of the rest:\n\nIt turns out that almost 39000 of the \"invalid escape \\ sequence\"\nerrors are due to use of \\D, \\S, or \\W within a character class.\nWe support the positive-class shorthands \\d, \\s, \\w there, but not\ntheir negations. I think that this might be something that Henry\nSpencer just never got around to; I don't see any fundamental reason\nwe can't allow it, although some refactoring might be needed in the\nregex lexer. Given the apparent popularity of this notation, maybe\nwe should put some work into that.\n\n(Having said that, I can't help noticing that a very large fraction\nof those usages look like, eg, \"[\\w\\W]\". It seems to me that that's\na very expensive and unwieldy way to spell \".\". Am I missing\nsomething about what that does in Javascript?)\n\nAbout half of the remaining escape-sequence complaints seem to be due\nto just randomly backslashing alphanumeric characters that don't need\nit, as for example \"i\" in \"\\itunes\\.apple\\.com\". Apparently\nJavascript is content to take \"\\i\" as just meaning \"i\". Our engine\nrejects that, with a view to keeping such combinations reserved for\nfuture definition. That's fine by me so I don't want to change it.\n\nOf the rest, many are abbreviated numeric escapes, eg \"\\u45\" where our\nengine wants to see \"\\u0045\". I don't think being laxer about that\nwould be a great idea either.\n\nLastly, there are some occurrences like \"[\\1]\", which in context look\nlike the \\1 might be intended as a back-reference? But I don't really\nunderstand what that's supposed to do inside a bracket expression.\n\nThe \"invalid character range\" errors seem to be coming from constructs\nlike \"[A-Za-z0-9-/]\", which our engine rejects because it looks like\na messed-up character range.\n\nAll but 123 of the \"invalid backreference number\" complaints stem from\nusing backrefs inside lookahead constraints. 
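(For reference, such patterns are perfectly expressible under a backtracking\nimplementation; a minimal sketch in Python's re module, shown only to\nillustrate what these regexes are trying to do:)\n\n```python\nimport re\n\n# A backreference inside a lookahead: match a word only when the text\n# immediately following it starts with the same word. Backtracking\n# engines accept this; a Spencer-style engine has solid implementation\n# reasons to reject it.\nm = re.match(r'(\\w+) (?=\\1)', 'abc abcdef')\nprint(m.group(1))  # 'abc'\n```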
Some of the rest look\nlike they think you can put capturing parens inside a lookahead\nconstraint and then backref that. I'm not really convinced that such\nconstructs have a well-defined meaning. (I looked at the ECMAscript\ndefinition of regexes, and they do say it's allowed, but when trying\nto define it they resort to handwaving about backtracking; at best that\nis a particularly lame version of specification by implementation.)\nSpencer chose to forbid these cases in our engine, and I think there\nare very good implementation reasons why it won't work. Perhaps we\ncould provide a clearer error message about it, though.\n\n307 of the \"brackets [] not balanced\" errors, as well as the one\n\"parentheses () not balanced\" error, seem to trace to the fact that\nJavascript considers \"[]\" to be a legal empty character class, whereas\nPOSIX doesn't allow empty character classes so our engine takes the\n\"]\" literally, and then looks for a right bracket it won't find.\n(That is, in POSIX \"[]x]\" is a character class matching ']' and 'x'.)\nMaybe I'm misinterpreting this too, because if I read the\ndocumentation correctly, \"[]\" in Javascript matches nothing, making\nit impossible for the regex to succeed. Why would such a construct\nappear this often?\n\nThe remainder of the bracket errors happen because in POSIX, the\nsequences \"[:\", \"[=\", and \"[.\" within a bracket expression introduce\nspecial syntax, whereas in Javascript '[' is just an ordinary data\ncharacter within a bracket expression. Not much we can do here; the\nstandards are just incompatible.\n\nAll but 3 of the \"invalid repetition count(s)\" errors come from\nquantifiers larger than our implementation limit of 255. A lot of\nthose are exactly 256, though I saw one as high as 3000. 
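(Python's re module, which has no comparable ceiling, makes the equivalence\neasy to check -- the bound is pure shorthand for repeating the atom:)\n\n```python\nimport re\n\n# \"a{256}\" accepts exactly the same strings as writing 256 a's in a row;\n# a low engine limit on the bound is an implementation ceiling, not a\n# semantic restriction.\nassert re.fullmatch(r'a{256}', 'a' * 256)\nassert re.fullmatch(r'a{256}', 'a' * 255) is None\n```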
The\nremaining 3 errors are from syntax like \"[0-9]{0-3}\", which is a\nsyntax error according to our engine (\"[0-9]{0,3}\" is correct).\nAFAICT it's not a valid quantifier according to Javascript either;\nperhaps that engine is just taking the \"{0-3}\" as literal text?\n\nGiven this, it seems like there's a fairly strong case for increasing\nour repetition-count implementation limit, at least to 256, and maybe\n1000 or so. I'm hesitant to make the limit *really* large, but if\nwe can handle a regex containing thousands of \"x\"'s, it's not clear\nwhy you shouldn't be able to write that as \"x{0,1000}\".\n\nAll of the \"quantifier operand invalid\" errors come from these\nthree patterns:\n\t((?!\\\\)?\\{0(?!\\\\)?\\})\n\t((?!\\\\)?\\{1(?!\\\\)?\\})\n\tclass=\"(?!(tco-hidden|tco-display|tco-ellipsis))+.*?\"|data-query-source=\".*?\"|dir=\".*?\"|rel=\".*?\"\nwhich are evidently trying to apply a quantifier to a lookahead\nconstraint, which is just silly.\n\nIn short, a lot of this is from incompatible standards, or maybe\nfrom varying ideas about whether to throw an error for invalid\nconstructs. But I see a couple things we could improve.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Feb 2021 13:53:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "On Thu, Feb 18, 2021, at 19:10, Tom Lane wrote:\n> \"Joel Jacobson\" <joel@compiler.org> writes:\n> >> I've produced a new dataset which now also includes the regex flags (if\n> >> any) used for each subject applied to a pattern.\n> \n> Again, thanks for collecting this data! I'm a little confused about\n> how you produced the results in the \"tests\" table, though. It sort\n> of looks like you tried to feed the Javascript flags to regexp_match(),\n> which unsurprisingly doesn't work all that well.\n\nThat's exactly what I did. 
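(In other words, no translation at all. A minimal conversion shim -- a\nhypothetical sketch, not anything the harness actually ran -- could only\npass through 'i', plus 'm' (which PostgreSQL accepts as a synonym for\nnewline-sensitive matching), dropping Javascript-only flags such as 'g',\n'u' and 'y':)\n\n```python\n# Hypothetical helper: keep only the flag letters that regexp_match()\n# also understands; 'g' has no place here since PostgreSQL expresses\n# global matching via regexp_matches() rather than a flag.\nPG_COMPATIBLE = frozenset('im')\n\ndef js_flags_to_pg(flags: str) -> str:\n    return ''.join(f for f in flags if f in PG_COMPATIBLE)\n\nprint(js_flags_to_pg('gim'))  # 'im'\nprint(js_flags_to_pg('gy'))   # ''\n```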
Some of the flags work the same between Javascript and PostgreSQL, others don't.\n\nI thought maybe something interesting would surface in just trying them blindly.\n\nFlags that aren't supported and gives errors are reported as tests where error is not null.\n\nMost patterns have no flags, and second most popular is just the \"i\" flag, which should work the same.\n\nSELECT flags, COUNT(*) FROM patterns GROUP BY 1 ORDER BY 2 DESC;\nflags | count\n-------+--------\n | 151927\ni | 120336\ngi | 26057\ng | 13263\ngm | 4606\ngim | 699\nim | 491\ny | 367\nm | 365\ngy | 105\nu | 50\ngiy | 38\ngiu | 20\ngimu | 14\niy | 11\niu | 6\ngimy | 3\ngu | 2\ngmy | 2\nimy | 1\nmy | 1\n(21 rows)\n\nThis query shows what Javascript-regex-flags that could be used as-is without errors:\n\nSELECT\n patterns.flags,\n COUNT(*)\nFROM tests\nJOIN subjects ON subjects.subject_id = tests.subject_id\nJOIN patterns ON patterns.pattern_id = subjects.pattern_id\nWHERE tests.error IS NULL\nGROUP BY 1\nORDER BY 2;\n\nflags | count\n-------+---------\nim | 2534\nm | 4460\ni | 543598\n | 2704177\n(4 rows)\n\nI considered filtering/converting the flags to PostgreSQL,\nmaybe that would be an interesting approach to try as well.\n\n> \n> Even discounting\n> that, I'm not getting quite the same results, and I don't understand\n> why not. 
So how was that made from the raw \"patterns\" and \"subjects\"\n> tables?\n\nThe rows in the tests table were generated by the create_regexp_tests() function [1]\n\nEach subject now has a foreign key to a specific pattern,\nwhere the (pattern, flags) combination are unique in patterns.\nThe actual unique constraint is on (pattern_hash, flags) to avoid\nan index directly on pattern which can be huge as we've seen.\n\nSo, for each subject, it is known via the pattern_id\nexactly what flags were used when the regex was compiled\n(and later executed/applied with the subject).\n\n[1] https://github.com/truthly/regexes-in-the-wild/blob/master/create_regexp_tests.sql\n\n> \n> > PostgreSQL 13.2 on x86_64-apple-darwin19.6.0, compiled by Apple clang version 11.0.3 (clang-1103.0.32.62), 64-bit\n> > Time: 758514.703 ms (12:38.515)\n> > Time: 755883.600 ms (12:35.884)\n> > Time: 746522.107 ms (12:26.522)\n> > \n> > PostgreSQL 14devel on x86_64-apple-darwin20.3.0, compiled by Apple clang version 12.0.0 (clang-1200.0.32.29), 64-bit\n> > HEAD (4e703d671)\n> > Time: 519620.646 ms (08:39.621)\n> > Time: 518998.366 ms (08:38.998)\n> > Time: 519696.129 ms (08:39.696)\n> \n> Hmmm ... we haven't yet committed any performance-relevant changes to the\n> regex code, so it can't take any credit for this improvement from 13.2 to\n> HEAD. I speculate that this is due to some change in our parallelism\n> stuff (since I observe that this query is producing a parallelized hash\n> plan). Still, the next drop to circa 2:21 runtime is impressive enough\n> by itself.\n\nOK. 
Another factor might perhaps be the PostgreSQL 10, 11, 12, 13 versions were compiled elsewhere,\nI used the OS X binaries from https://postgresapp.com/, whereas version 14 I of course compiled myself.\nMaybe I should have compiled 10, 11, 12, 13 myself instead, for a better comparison,\nbut I mostly just wanted to verify if I could find any differences, the performance comparison was a bonus.\n\n> \n> > Heh, what a funny coincidence:\n> > The regex I used to shrink the very-long-pattern,\n> > actually happens to run a lot faster with the patches.\n> \n> Yeah, that just happens to be a poster child for the MATCHALL idea:\n> \n> > EXPLAIN ANALYZE SELECT regexp_matches(repeat('a',100000),'^(.{1,80})(.*?)(.{1,80})$');\n> \n> Each of the parenthesized subexpressions of the RE is successfully\n> recognized as being MATCHALL, with length range 1..80 for two of them and\n> 0..infinity for the middle one. That means the engine doesn't have to\n> physically scan the text to determine whether a possible division point\n> satisfies the sub-regexp; and that means we can find the correct division\n> points in O(N) not O(N^2) time.\n\nVery nice.\n\nLike you said earlier, perhaps the regex engine has been optimized enough for this time.\nIf not, you want to investigate an additional idea,\nthat I think can be seen as a generalization of the optimization trick for (.*),\nif I've understood how it works correctly.\n\nLet's see if I can explain the idea:\n\nOne of the problems with representing regexes with large bracket range expressions, like [a-z],\nis you get an explosion of edges, if the model can only represent state transitions for single characters.\n\nIf we could instead let a single edge (for a state transition) represent a set of characters,\nor normally even more efficiently, a set of range of characters, then we could reduce the\nnumber of edges we need to represent the graph.\n\nThe naive approach to just use the ranges as-is doesn't work.\n\nInstead, the graph must 
first be created with single-character edges.\n\nIt is then examined what ranges can be constructed in a way that no single range\noverlaps any other range, so that every range can be seen as a character in an alphabet.\n\nPerhaps a bit of fiddling with some examples is easiest\nto get a grip of the idea.\n\nHere is a live demo of the idea:\nhttps://compiler.org/reason-re-nfa/src/index.html\n\nThe graphs are rendered live when typing in the regex,\nusing a Javascript port of GraphViz.\n\nFor example, try entering the regex: t[a-z]*m\n\nThis generates this range-optimized graph for the regex:\n\n /--[a-ln-su-z]-----------------\\\n |/------t--------------------\\ |\n || | |\n-->(0)--t-->({0,1})----m-------->({0 1 2}) | |\n ^---[a-ln-su-z]--/ | |\n ^-------t-------/ | |\n ^---------------------------/ |\n ^-----------------------------/\nNotice how the [a-z] bracket expression has been split up,\nand we now have 3 distinct set of \"ranges\":\nt\nm\n[a-ln-su-z]\n\nSince no ranges are overlapping, each such range can safely be seen as a letter in an alphabet.\n\nOnce we have our final graph, but before we proceed to generate the machine code for it,\nwe can shrink the graph further by merging ranges together, which eliminate some edges:\n\n /--------------\\\n | |\n--->(0)--t-->(1)<--[a-ln-z]--/\n |^-[a-lnz]-\\\n \\----m-->((2))<----\\\n | |\n \\---m---/\n\nNotice how [a-ln-su-z]+t becomes [a-ln-z].\n\nAnother optimization I've come up with (or probably re-invented because it feels quite obvious),\nis to read more than one character, when knowing for sure multiple characters-in-a-row\nare expected, by concatenating edges having only one parent and one child.\n\nIn our example, we know for sure at least two characters will be read for the regex t[a-z]*m,\nso with this optimization enabled, we get this graph:\n\n /--[a-ln-z]\n | |\n--->(0)---t[a-ln-z]--->(1)<---+--[a-ln-z]\n | | /\n | \\---m--->((2))<------\\\n \\--------------tm------------^ | |\n \\----m----/\n\n\nThis 
makes not much difference for a few characters,\nbut if we have a long pattern with a long sentence\nthat is repeated, we could e.g. read in 32 bytes\nand compare them all in one operation,\nif our machine had 256-bits SIMD registers/instructions.\n\nThis idea has also been implemented in the online demo.\n\nThere is a level which can be adjusted\nfrom 0 to 32 to control how many bytes to merge at most,\nlocated in the \"[+]dfa5 = merge_linear(dfa4)\" step.\n\nAnyway, I can totally understand if you've had enough of regex optimizations for this time,\nbut in case not, I wanted to share my work in this field, in case it's interesting to look at now or in the future.\n\n/Joel", "msg_date": "Thu, 18 Feb 2021 20:58:07 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "On Thu, Feb 18, 2021, at 20:58, Joel Jacobson wrote:\n>Like you said earlier, perhaps the regex engine has been optimized enough for this time.\n>If not, you want to investigate an additional idea,\n\nIn the above 
sentence, I meant \"you _may_ want to\".\nI'm not at all sure these ideas are applicable in the PostgreSQL regex engine,\nso feel free to silently ignore these if you feel there is a risk for time waste.\n\n>that I think can be seen as a generalization of the optimization trick for (.*),\n>if I've understood how it works correctly.\n\nActually not sure if it can be seen as a generalization,\nI just came to think of my ideas since they also improve\nthe case when you have lots of (.*) or bracket expressions of large ranges.\n\n/Joel", "msg_date": "Thu, 18 Feb 2021 21:44:07 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> Let's see if I can explain the idea:\n> One of the problems with representing regexes with large bracket range expressions, like [a-z],\n> is you get an explosion of edges, if the model can only represent state transitions for single characters.\n> If we could instead let a single edge (for a state transition) represent a set of characters,\n> or normally even more efficiently, a set of range of characters, then we could reduce the\n> number of edges we need to represent the graph.\n> The naive approach to just use the ranges as-is doesn't work.\n> Instead, the graph must first be created with single-character edges.\n> It is then examined what ranges can be constructed in a way that no single range\n> overlaps any other range, so that every range can be seen as a character in an alphabet.\n\nHmm ... I might be misunderstanding, but I think our engine already\ndoes a version of this. 
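The partitioning step Joel describes can be sketched in a few lines of Python (an illustrative toy, not the engine's actual data structures or names): split every character the regex mentions into disjoint groups, so that each group behaves like a single letter of the working alphabet.

```python
from collections import defaultdict
from itertools import chain

def partition(classes):
    """Split characters into disjoint groups: two characters share a
    group exactly when they belong to the same set of input classes."""
    groups = defaultdict(set)
    for ch in set(chain.from_iterable(classes)):
        groups[tuple(ch in c for c in classes)].add(ch)
    return set(map(frozenset, groups.values()))

# For t[a-z]*m the input classes are {t}, {m}, and [a-z]; the result is a
# three-letter alphabet: t, m, and the remaining letters [a-ln-su-z].
colors = partition([{"t"}, {"m"}, set("abcdefghijklmnopqrstuvwxyz")])
assert frozenset("t") in colors
assert frozenset("m") in colors
assert frozenset(set("abcdefghijklmnopqrstuvwxyz") - {"t", "m"}) in colors
```

With such a partition, a bracket expression like [a-z] contributes a couple of group-labeled arcs instead of 26 single-character ones.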
See the discussion of \"colors\" in\nsrc/backend/regex/README.\n\n> Another optimization I've come up with (or probably re-invented because it feels quite obvious),\n> is to read more than one character, when knowing for sure multiple characters-in-a-row\n> are expected, by concatenating edges having only one parent and one child.\n\nMaybe. In practice the actual scanning tends to be tracking more than one\npossible NFA state in parallel, so I'm not sure how often we could expect\nto be able to use this idea. That is, even if we know that state X can\nonly succeed by following an arc to Y and then another to Z, we might\nalso be interested in what happens if the NFA is in state Q at this point;\nand it seems unlikely that Q would have exactly the same two following\narc colors.\n\nI do have some ideas about possible future optimizations, and one reason\nI'm grateful for this large set of real regexes is that it can provide a\nconcrete basis for deciding that particular optimizations are or are not\nworth pursuing. So thanks again for collecting it!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Feb 2021 15:44:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "On Thu, Feb 18, 2021, at 21:44, Tom Lane wrote:\n>Hmm ... I might be misunderstanding, but I think our engine already\n>does a version of this. See the discussion of \"colors\" in\n>src/backend/regex/README.\n\nThanks, I will read it with great interest.\n\n>Maybe. In practice the actual scanning tends to be tracking more than one\n>possible NFA state in parallel, so I'm not sure how often we could expect\n>to be able to use this idea. 
That is, even if we know that state X can\n>only succeed by following an arc to Y and then another to Z, we might\n>also be interested in what happens if the NFA is in state Q at this point;\n>and it seems unlikely that Q would have exactly the same two following\n>arc colors.\n\nRight. Actually I don't have a clear idea on how it could be implemented in an NFA engine.\n\n>I do have some ideas about possible future optimizations, and one reason\n>I'm grateful for this large set of real regexes is that it can provide a\n>concrete basis for deciding that particular optimizations are or are not\n>worth pursuing. So thanks again for collecting it!\n\nMy pleasure. Thanks for using it!\n\n/Joel", "msg_date": "Thu, 18 Feb 2021 21:54:38 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "On Thu, Feb 18, 2021, at 19:53, Tom Lane wrote:\n>(Having said that, I can't help noticing that a very large fraction\n>of those usages look like, eg, \"[\\w\\W]\". It seems to me that that's\n>a very expensive and unwieldy way to spell \".\". Am I missing\n>something about what that does in Javascript?)\n\nThis popular regex\n\n    ^(?:\\s*(<[\\w\\W]+>)[^>]*|#([\\w-]+))$\n\nis coming from jQuery:\n\n// A simple way to check for HTML strings\n// Prioritize #id over <tag> to avoid XSS via location.hash (#9521)\n// Strict HTML recognition (#11290: must start with <)\n// Shortcut simple #id case for speed\nrquickExpr = /^(?:\\s*(<[\\w\\W]+>)[^>]*|#([\\w-]+))$/,\n\nFrom: https://code.jquery.com/jquery-3.5.1.js\n\nI think this is a non-POSIX hack to match any character, including newlines,\nwhich are not included unless the \"s\" flag is set.\n\nJavascript test:\n\n\"foo\\nbar\".match(/(.+)/)[1];\n\"foo\"\n\n\"foo\\nbar\".match(/(.+)/s)[1];\n\"foo\nbar\"\n\n\"foo\\nbar\".match(/([\\w\\W]+)/)[1];\n\"foo\nbar\"\n\n/Joel", "msg_date": "Fri, 19 Feb 2021 13:45:34 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> On Thu, Feb 18, 2021, at 19:53, Tom Lane wrote:\n>> (Having said that, I can't help noticing that a very large fraction\n>> of those usages look like, eg, \"[\\w\\W]\". It seems to me that that's\n>> a very expensive and unwieldy way to spell \".\". Am I missing\n>> something about what that does in Javascript?)\n\n> I think this is a non-POSIX hack to match any character, including newlines,\n> which are not included unless the \"s\" flag is set.\n\n> \"foo\\nbar\".match(/([\\w\\W]+)/)[1];\n> \"foo\n> bar\"\n\nOooh, that's very interesting. I guess the advantage of that over using\nthe 's' flag is that you can have different behaviors at different places\nin the same regex.\n\nI was just wondering about this last night in fact, while hacking on\nthe code to get it to accept \\W etc in bracket expressions. I see that\nright now, our code thinks that NLSTOP mode ('n' switch, the opposite\nof 's') should cause \\W \\D \\S to not match newline. 
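For comparison, in a Perl-style engine the class escapes are unaffected by the dotall switch; only "." itself changes meaning. Python's re module, which follows Perl's rules on this point, shows the behavior concretely (a quick illustrative check, not PostgreSQL's own semantics):

```python
import re

s = "foo!\n!bar"
# \W always matches the newline; the dotall flag changes only "." itself.
assert re.search(r"\W+", s).group() == "!\n!"
assert re.search(r".+", s).group() == "foo!"                  # "." stops at \n
assert re.search(r".+", s, re.DOTALL).group() == "foo!\n!bar" # dotall crosses it
assert re.search(r"[\w\W]+", s).group() == "foo!\n!bar"       # the jQuery trick
```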
That seems a little\nweird, not least because \\S should probably be different from the other\ntwo, and it isn't. And now we see it'd mean that you couldn't use the 'n'\nswitch to duplicate Javascript's default behavior in this area. Should we\nchange it? (I wonder what Perl does.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Feb 2021 10:26:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "On Fri, Feb 19, 2021, at 16:26, Tom Lane wrote:\n>\"Joel Jacobson\" <joel@compiler.org> writes:\n>> On Thu, Feb 18, 2021, at 19:53, Tom Lane wrote:\n>>> (Having said that, I can't help noticing that a very large fraction\n>>> of those usages look like, eg, \"[\\w\\W]\". It seems to me that that's\n>>> a very expensive and unwieldy way to spell \".\". Am I missing\n>>> something about what that does in Javascript?)\n>\n>> I think this is a non-POSIX hack to match any character, including newlines,\n>> which are not included unless the \"s\" flag is set.\n>\n>> \"foo\\nbar\".match(/([\\w\\W]+)/)[1];\n>> \"foo\n>> bar\"\n>\n>Oooh, that's very interesting. I guess the advantage of that over using\n>the 's' flag is that you can have different behaviors at different places\n>in the same regex.\n\nI would guess the same thing.\n\n>I was just wondering about this last night in fact, while hacking on\n>the code to get it to accept \\W etc in bracket expressions. I see that\n>right now, our code thinks that NLSTOP mode ('n' switch, the opposite\n>of 's') should cause \\W \\D \\S to not match newline. That seems a little\n>weird, not least because \\S should probably be different from the other\n>two, and it isn't. And now we see it'd mean that you couldn't use the 'n'\n>switch to duplicate Javascript's default behavior in this area. Should we\n>change it? 
(I wonder what Perl does.)\n>\n>regards, tom lane\n\nTo allow comparing PostgreSQL vs Javascript vs Perl,\nI installed three helper-functions using plv8 and plperl,\nand also one convenience function for PostgreSQL\nto catch errors and return the error string instead:\n\nThe string used in this test is \"foo!\\n!bar\",\nwhich aims to detect differences in how new-lines\nand non alpha-number characters are handled.\n\nTo allow PostgreSQL to be compared with Javascript and Perl,\nthe \"n\" flag is used for PostgreSQL when no flags are used for Javascript/Perl,\nand no flag for PostgreSQL when the \"s\" flag is used for Javascript/Perl,\nfor the results to be comparable.\n\nIn Javascript, when a regex contains capture groups, the entire match\nis always returns as the first array element.\nTo make it easier to visually compare the results,\nthe first element is removed from Javascript,\nwhich works in this test since all regexes contain\nexactly one capture group.\n\nHere are the results:\n\n$ psql -e -f not_alnum.sql regex\n\nSELECT\n regexp_match_pg(E'foo!\\n!bar', '(.+)', 'n'),\n (regexp_match_v8(E'foo!\\n!bar', '(.+)', ''))[2:],\n regexp_match_pl(E'foo!\\n!bar', '(.+)', '')\n;\nregexp_match_pg | regexp_match_v8 | regexp_match_pl\n-----------------+-----------------+-----------------\n{foo!} | {foo!} | {foo!}\n(1 row)\n\nSELECT\n regexp_match_pg(E'foo!\\n!bar', '(.+)', ''),\n (regexp_match_v8(E'foo!\\n!bar', '(.+)', 's'))[2:],\n regexp_match_pl(E'foo!\\n!bar', '(.+)', 's')\n;\nregexp_match_pg | regexp_match_v8 | regexp_match_pl\n-----------------+-----------------+-----------------\n{\"foo! +| {\"foo! +| {\"foo! 
+\n!bar\"} | !bar\"} | !bar\"}\n(1 row)\n\nSELECT\n regexp_match_pg(E'foo!\\n!bar', '([\\w\\W]+)', 'n'),\n (regexp_match_v8(E'foo!\\n!bar', '([\\w\\W]+)', ''))[2:],\n regexp_match_pl(E'foo!\\n!bar', '([\\w\\W]+)', '')\n;\n regexp_match_pg | regexp_match_v8 | regexp_match_pl\n------------------------------------------------------------+-----------------+-----------------\n{\"invalid regular expression: invalid escape \\\\ sequence\"} | {\"foo! +| {\"foo! +\n | !bar\"} | !bar\"}\n(1 row)\n\nSELECT\n regexp_match_pg(E'foo!\\n!bar', '([\\w\\W]+)', ''),\n (regexp_match_v8(E'foo!\\n!bar', '([\\w\\W]+)', 's'))[2:],\n regexp_match_pl(E'foo!\\n!bar', '([\\w\\W]+)', 's')\n;\n regexp_match_pg | regexp_match_v8 | regexp_match_pl\n------------------------------------------------------------+-----------------+-----------------\n{\"invalid regular expression: invalid escape \\\\ sequence\"} | {\"foo! +| {\"foo! +\n | !bar\"} | !bar\"}\n(1 row)\n\nSELECT\n regexp_match_pg(E'foo!\\n!bar', '([\\w]+)', 'n'),\n (regexp_match_v8(E'foo!\\n!bar', '([\\w]+)', ''))[2:],\n regexp_match_pl(E'foo!\\n!bar', '([\\w]+)', '')\n;\nregexp_match_pg | regexp_match_v8 | regexp_match_pl\n-----------------+-----------------+-----------------\n{foo} | {foo} | {foo}\n(1 row)\n\nSELECT\n regexp_match_pg(E'foo!\\n!bar', '([\\w]+)', ''),\n (regexp_match_v8(E'foo!\\n!bar', '([\\w]+)', 's'))[2:],\n regexp_match_pl(E'foo!\\n!bar', '([\\w]+)', 's')\n;\nregexp_match_pg | regexp_match_v8 | regexp_match_pl\n-----------------+-----------------+-----------------\n{foo} | {foo} | {foo}\n(1 row)\n\nSELECT\n regexp_match_pg(E'foo!\\n!bar', '([\\W]+)', 'n'),\n (regexp_match_v8(E'foo!\\n!bar', '([\\W]+)', ''))[2:],\n regexp_match_pl(E'foo!\\n!bar', '([\\W]+)', '')\n;\n regexp_match_pg | regexp_match_v8 | regexp_match_pl\n------------------------------------------------------------+-----------------+-----------------\n{\"invalid regular expression: invalid escape \\\\ sequence\"} | {\"! +| {\"! 
+\n | !\"} | !\"}\n(1 row)\n\nSELECT\n regexp_match_pg(E'foo!\\n!bar', '([\\W]+)', ''),\n (regexp_match_v8(E'foo!\\n!bar', '([\\W]+)', 's'))[2:],\n regexp_match_pl(E'foo!\\n!bar', '([\\W]+)', 's')\n;\n regexp_match_pg | regexp_match_v8 | regexp_match_pl\n------------------------------------------------------------+-----------------+-----------------\n{\"invalid regular expression: invalid escape \\\\ sequence\"} | {\"! +| {\"! +\n | !\"} | !\"}\n(1 row)\n\nSELECT\n regexp_match_pg(E'foo!\\n!bar', '(\\w+)', 'n'),\n (regexp_match_v8(E'foo!\\n!bar', '(\\w+)', ''))[2:],\n regexp_match_pl(E'foo!\\n!bar', '(\\w+)', '')\n;\nregexp_match_pg | regexp_match_v8 | regexp_match_pl\n-----------------+-----------------+-----------------\n{foo} | {foo} | {foo}\n(1 row)\n\nSELECT\n regexp_match_pg(E'foo!\\n!bar', '(\\w+)', ''),\n (regexp_match_v8(E'foo!\\n!bar', '(\\w+)', 's'))[2:],\n regexp_match_pl(E'foo!\\n!bar', '(\\w+)', 's')\n;\nregexp_match_pg | regexp_match_v8 | regexp_match_pl\n-----------------+-----------------+-----------------\n{foo} | {foo} | {foo}\n(1 row)\n\nSELECT\n regexp_match_pg(E'foo!\\n!bar', '(\\W+)', 'n'),\n (regexp_match_v8(E'foo!\\n!bar', '(\\W+)', ''))[2:],\n regexp_match_pl(E'foo!\\n!bar', '(\\W+)', '')\n;\nregexp_match_pg | regexp_match_v8 | regexp_match_pl\n-----------------+-----------------+-----------------\n{!} | {\"! +| {\"! +\n | !\"} | !\"}\n(1 row)\n\nSELECT\n regexp_match_pg(E'foo!\\n!bar', '(\\W+)', ''),\n (regexp_match_v8(E'foo!\\n!bar', '(\\W+)', 's'))[2:],\n regexp_match_pl(E'foo!\\n!bar', '(\\W+)', 's')\n;\nregexp_match_pg | regexp_match_v8 | regexp_match_pl\n-----------------+-----------------+-----------------\n{\"! +| {\"! +| {\"! 
+\n!\"} | !\"} | !\"}\n(1 row)\n\n/Joel", "msg_date": "Sat, 20 Feb 2021 10:19:04 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "On 02/19/21 10:26, Tom Lane wrote:\n>> \"foo\\nbar\".match(/([\\w\\W]+)/)[1];\n>> \"foo\n>> bar\"\n> \n> Oooh, that's very interesting. I guess the advantage of that over using\n> the 's' flag is that you can have different behaviors at different places\n> in the same regex.\n\n\nPerl, Python, and Java (at least) all have a common syntax for changing\nflags locally in a non-capturing group, so you could just match (?s:.)\n-- which I guess isn't any shorter than [\\w\\W] but makes the intent more\nclear.\n\nI see that JavaScript, for some reason, does not advertise that. We don't\neither; we have (?:groups) without flags, and we have (?flags) but only\nglobal at the start of the regex. Would it be worthwhile to jump on the\nbandwagon and support local flags in groups?\n\nWe currently give 2201B: invalid regular expression: invalid embedded option\non an attempt to use the syntax, so implementing it couldn't break anything\nsomeone is already doing.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sat, 20 Feb 2021 18:31:39 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 02/19/21 10:26, Tom Lane wrote:\n>> Oooh, that's very interesting. 
I guess the advantage of that over using\n>> the 's' flag is that you can have different behaviors at different places\n>> in the same regex.\n\n> Perl, Python, and Java (at least) all have a common syntax for changing\n> flags locally in a non-capturing group, so you could just match (?s:.)\n> -- which I guess isn't any shorter than [\\w\\W] but makes the intent more\n> clear.\n\nHmm, interesting.\n\n> I see that JavaScript, for some reason, does not advertise that. We don't\n> either; we have (?:groups) without flags, and we have (?flags) but only\n> global at the start of the regex. Would it be worthwhile to jump on the\n> bandwagon and support local flags in groups?\n\nYeah, perhaps. Not sure whether there are any built-in assumptions about\nthese flags holding still throughout the regex; that'd require some\nreview. But it seems like it could be a useful feature, and I don't\nsee any argument why we shouldn't have it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 20 Feb 2021 20:13:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "Hi,\n\nOne of the recent commits have introduce a new warning with gcc 10, when\nbuilding with optimizations:\n\nIn file included from /home/andres/src/postgresql/src/backend/regex/regcomp.c:2304:\n/home/andres/src/postgresql/src/backend/regex/regc_nfa.c: In function ‘checkmatchall’:\n/home/andres/src/postgresql/src/backend/regex/regc_nfa.c:3087:20: warning: array subscript -1 is outside array bounds of ‘_Bool[257]’ [-Warray-bounds]\n 3087 | hasmatch[depth] = true;\n | ^\n/home/andres/src/postgresql/src/backend/regex/regc_nfa.c:2920:8: note: while referencing ‘hasmatch’\n 2920 | bool hasmatch[DUPINF + 1];\n | ^~~~~~~~\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 23 Feb 2021 09:34:37 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Some 
regular-expression performance hacking" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> One of the recent commits have introduce a new warning with gcc 10, when\n> building with optimizations:\n\n> In file included from /home/andres/src/postgresql/src/backend/regex/regcomp.c:2304:\n> /home/andres/src/postgresql/src/backend/regex/regc_nfa.c: In function ‘checkmatchall’:\n> /home/andres/src/postgresql/src/backend/regex/regc_nfa.c:3087:20: warning: array subscript -1 is outside array bounds of ‘_Bool[257]’ [-Warray-bounds]\n> 3087 | hasmatch[depth] = true;\n> | ^\n\nHmph. There's an \"assert(depth >= 0)\" immediately in front of that,\nso I'm not looking too kindly on the compiler thinking it's smarter\nthan I am. Do you have a suggestion on how to shut it up?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Feb 2021 12:39:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "I wrote:\n> Hmph. There's an \"assert(depth >= 0)\" immediately in front of that,\n> so I'm not looking too kindly on the compiler thinking it's smarter\n> than I am. Do you have a suggestion on how to shut it up?\n\nOn reflection, maybe the thing to do is convert the assert into\nan always-on check, \"if (depth < 0) return false\". The assertion\nis essentially saying that there's no arc leading directly from\nthe pre state to the post state. Which there had better not be,\nor a lot of other stuff is going to go wrong; but I suppose there's\nno way to explain that to gcc. It is annoying to have to expend\nan always-on check for a can't-happen case, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Feb 2021 12:52:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "Hi,\n\nOn 2021-02-23 12:52:28 -0500, Tom Lane wrote:\n> I wrote:\n> > Hmph. 
There's an \"assert(depth >= 0)\" immediately in front of that,\n> > so I'm not looking too kindly on the compiler thinking it's smarter\n> > than I am. Do you have a suggestion on how to shut it up?\n\ngcc can't see the assert though, in an non-cassert optimized build... If\nI force assertions to be used, the warning vanishes.\n\n\n> On reflection, maybe the thing to do is convert the assert into\n> an always-on check, \"if (depth < 0) return false\". The assertion\n> is essentially saying that there's no arc leading directly from\n> the pre state to the post state. Which there had better not be,\n> or a lot of other stuff is going to go wrong; but I suppose there's\n> no way to explain that to gcc. It is annoying to have to expend\n> an always-on check for a can't-happen case, though.\n\nWouldn't quite work like that because of the restrictions of what pg\ninfrastructure we want to expose the regex engine to, but a\n if (depth < 0)\n pg_unreachable();\nwould avoid the runtime overhead and does fix the warning.\n\nI have been wondering about making Asserts do something along those\nlines - but it'd need to be opt-in, since we clearly have a lot of\nassertions that would cost too much.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 23 Feb 2021 10:05:35 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> One of the recent commits have introduce a new warning with gcc 10, when\n> building with optimizations:\n\nOddly, I see no such warning with Fedora's current compiler,\ngcc version 10.2.1 20201125 (Red Hat 10.2.1-9) (GCC) \n\nAre you using any special compiler switches?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Feb 2021 13:09:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some regular-expression performance hacking" }, { 
"msg_contents": "On 2021-02-23 13:09:18 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > One of the recent commits have introduce a new warning with gcc 10, when\n> > building with optimizations:\n>\n> Oddly, I see no such warning with Fedora's current compiler,\n> gcc version 10.2.1 20201125 (Red Hat 10.2.1-9) (GCC)\n>\n> Are you using any special compiler switches?\n\nA few. At first I didn't see any relevant ones - but I think it's just\nthat you need to use -O3 instead of -O2.\n\nandres@awork3:~/build/postgres/dev-optimize/vpath$ (cd src/backend/regex/ && ccache gcc-10 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -I../../../src/include -I/home/andres/src/postgresql/src/include -D_GNU_SOURCE -I/usr/include/libxml2 -c -o regcomp.o /home/andres/src/postgresql/src/backend/regex/regcomp.c -O2)\n\nandres@awork3:~/build/postgres/dev-optimize/vpath$ (cd src/backend/regex/ && ccache gcc-10 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -I../../../src/include -I/home/andres/src/postgresql/src/include -D_GNU_SOURCE -I/usr/include/libxml2 -c -o regcomp.o /home/andres/src/postgresql/src/backend/regex/regcomp.c -O3)\nIn file included from /home/andres/src/postgresql/src/backend/regex/regcomp.c:2304:\n/home/andres/src/postgresql/src/backend/regex/regc_nfa.c: In function ‘checkmatchall’:\n/home/andres/src/postgresql/src/backend/regex/regc_nfa.c:3086:20: warning: array subscript -1 is outside array bounds of ‘_Bool[257]’ [-Warray-bounds]\n 3086 | hasmatch[depth] = true;\n | ^\n/home/andres/src/postgresql/src/backend/regex/regc_nfa.c:2920:8: note: while 
referencing ‘hasmatch’\n 2920 | bool hasmatch[DUPINF + 1];\n | ^~~~~~~~\n\nandres@awork3:~/build/postgres/dev-optimize/vpath$ gcc-10 --version\ngcc-10 (Debian 10.2.1-6) 10.2.1 20210110\nCopyright (C) 2020 Free Software Foundation, Inc.\nThis is free software; see the source for copying conditions. There is NO\nwarranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 23 Feb 2021 10:18:51 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-02-23 12:52:28 -0500, Tom Lane wrote:\n>> ... It is annoying to have to expend\n>> an always-on check for a can't-happen case, though.\n\n> Wouldn't quite work like that because of the restrictions of what pg\n> infrastructure we want to expose the regex engine to, but a\n> if (depth < 0)\n> pg_unreachable();\n> would avoid the runtime overhead and does fix the warning.\n\nYeah, I still have dreams of someday converting the regex engine\ninto an independent project, so I don't want to make it depend on\npg_unreachable. I'll put in the low-tech fix.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Feb 2021 13:22:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-02-23 13:09:18 -0500, Tom Lane wrote:\n>> Oddly, I see no such warning with Fedora's current compiler,\n>> gcc version 10.2.1 20201125 (Red Hat 10.2.1-9) (GCC)\n>> Are you using any special compiler switches?\n\n> A few. At first I didn't see any relevant ones - but I think it's just\n> that you need to use -O3 instead of -O2.\n\nAh-hah, -O3 plus remembering to disable assertions makes it\nhappen here too. 
Will fix.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Feb 2021 13:36:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "Here's another little piece of regex performance hacking. This is based\non looking at a slow regexp I found in Tcl's bug tracker:\n\n-- Adapted from http://core.tcl.tk/tcl/tktview?name=446565\nselect regexp_matches(\nrepeat('<script> 123 </script> <script> 345 </script> <script> 123 </script>',\n100000),\n'<script(.(?!</script>))*?(doubleclick|flycast|burstnet|spylog)+?.*?</script>');\n\nThe core of the problem here is the lookahead constraint (?!</script>),\nwhich gets applied O(N^2) times for an N-character data string. The\npresent patch doesn't do anything to cut down the big-O problem, but\nit does take a swipe at cutting the constant factor, which should\nremain useful even if we find a way to avoid the O(N^2) issue.\n\nPoking at this with perf, I was surprised to observe that the dominant\ncost is not down inside lacon() as one would expect, but in the loop\nin miss() that is deciding where to call lacon(). 80% of the runtime\nis going into these three lines:\n\n for (i = 0; i < d->nstates; i++)\n if (ISBSET(d->work, i))\n for (ca = cnfa->states[i]; ca->co != COLORLESS; ca++)\n\nSo there are two problems here. The outer loop is iterating over all\nthe NFA states, even though only a small fraction of the states are\nlikely to have LACON out-arcs. (In the case at hand, the main NFA\nhas 78 states, of which just one has LACON out-arcs.) Then, for\nevery reachable state, we're scanning all its out-arcs to find the\nones that are LACONs. (Again, just a fraction of the out-arcs are\nlikely to be LACONs.) So the main thrust of this patch is to rearrange\nthe \"struct cnfa\" representation to separate plain arcs from LACON\narcs, allowing this loop to not waste time looking at irrelevant\nstates or arcs. 
This also saves some time in miss()'s preceding\nmain loop, which is only interested in plain arcs. Splitting the\nLACON arcs from the plain arcs complicates matters in a couple of\nother places, but none of them are in the least performance-critical.\n\nThe other thing I noticed while looking at miss() is that it will\ncall lacon() for each relevant arc, even though it's quite likely\nto see multiple arcs labeled with the same constraint number,\nfor which the answer must be the same. So I added some simple\nlogic to cache the last answer and re-use it if the next arc of\ninterest has the same color. (We could imagine working harder\nto cache in the presence of multiple interesting LACONs, but I'm\ndoubtful that it's worth the trouble. The one-entry cache logic\nis so simple it can hardly be a net loss, though.)\n\nOn my machine, the combination of these two ideas reduces the\nruntime of the example above from ~150 seconds to ~53 seconds,\nor nearly 3x better. I see something like a 2% improvement on\nJoel's test corpus, which might just be noise. So this isn't\nany sort of universal panacea, but it sure helps when LACON\nevaluation is the bottleneck.\n\nAny objections? or better ideas?\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 23 Feb 2021 21:32:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "I wrote:\n> On my machine, the combination of these two ideas reduces the\n> runtime of the example above from ~150 seconds to ~53 seconds,\n> or nearly 3x better. I see something like a 2% improvement on\n> Joel's test corpus, which might just be noise. So this isn't\n> any sort of universal panacea, but it sure helps when LACON\n> evaluation is the bottleneck.\n\nAfter another round of testing, I really can't see any improvement\nat all from that patch on anything except the original Tcl test\ncase. 
Indeed, a lot of cases seem very slightly worse, perhaps\nbecause compact() now has to make two passes over all the arcs.\nSo that's leaving me a bit dissatisfied with it; I'm going to\nstick it on the back burner for now, in hopes of a better idea.\n\nHowever, in a different line of thought, I realized that the\nmemory allocation logic could use some polishing. It gives out\nten arcs per NFA state initially, and then adds ten more at a time.\nHowever, that's not very bright when you look at the actual usage\npatterns, because most states have only one or two out-arcs,\nbut some have lots and lots. I instrumented things to gather\nstats about arcs-per-state on your larger corpus, and I got this,\nwhere the second column is the total fraction of states having\nthe given number of arcs or fewer:\n\n arcs | cum_fraction \n------+------------------------\n 0 | 0.03152871318455725868\n 1 | 0.55852399556959499493\n 2 | 0.79408539124378449284\n 3 | 0.86926656199366447221\n 4 | 0.91726891675794579062\n 5 | 0.92596934405572457792\n 6 | 0.93491612836055807037\n 7 | 0.94075102352639209644\n 8 | 0.94486598829672779379\n 9 | 0.94882085883928361399\n 10 | 0.95137992908336444821\n 11 | 0.95241399914559696173\n 12 | 0.95436547669138874594\n 13 | 0.95534682472329051385\n 14 | 0.95653340893356523452\n 15 | 0.95780804864876924571\n 16 | 0.95902387577636979702\n 17 | 0.95981494467267418552\n 18 | 0.96048662216159976997\n 19 | 0.96130294229052153065\n 20 | 0.96196856160309755204\n...\n 3238 | 0.99999985870142624926\n 3242 | 0.99999987047630739515\n 4095 | 0.99999987342002768163\n 4535 | 0.99999987930746825457\n 4642 | 0.99999988225118854105\n 4706 | 0.99999989402606968694\n 5890 | 0.99999989696978997342\n 6386 | 0.99999990874467111931\n 7098 | 0.99999991168839140579\n 7751 | 0.99999994701303484347\n 7755 | 0.99999998233767828116\n 7875 | 0.99999998822511885410\n 8049 | 1.00000000000000000000\n\nSo it seemed clear to me that we should only give out a couple of arcs\nper state initially, 
but then let it ramp up faster than 10 arcs per\nadditional malloc. After a bit of fooling I have the attached.\nThis does nothing for the very largest examples in the corpus (the\nones that cause \"regex too complex\") --- those were well over the\nREG_MAX_COMPILE_SPACE limit before and they still are. But all the\nrest get nicely smaller. The average pg_regcomp memory consumption\ndrops from ~89K to ~48K.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 24 Feb 2021 18:19:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "I wrote:\n> However, in a different line of thought, I realized that the\n> memory allocation logic could use some polishing. It gives out\n> ten arcs per NFA state initially, and then adds ten more at a time.\n> However, that's not very bright when you look at the actual usage\n> patterns, because most states have only one or two out-arcs,\n> but some have lots and lots.\n\nHold the phone ... after a bit I started to wonder why Spencer made\narc allocation be per-state at all, rather than using one big pool\nof arcs. Maybe there's some locality-of-reference argument to be\nmade for that, but I doubt he was worrying about that back in the\n90s. Besides, the regex compiler spends a lot of time iterating\nover in-chains and color-chains, not just out-chains; it's hard\nto see why trying to privilege the latter case would help much.\n\nWhat I suspect, based on this old comment in regguts.h:\n * Having a \"from\" pointer within each arc may seem redundant, but it\n * saves a lot of hassle.\nis that Henry did it like this initially to save having a \"from\"\npointer in each arc, and never re-thought the allocation mechanism\nafter he gave up on that idea.\n\nSo I rearranged things to allocate arcs out of a common pool, and for\ngood measure made the state allocation code do the same thing. 
I was\npretty much blown away by the results: not only is the average-case\nspace usage about half what it is on HEAD, but the worst-case drops\nby well more than a factor of ten. I'd previously found, by raising\nREG_MAX_COMPILE_SPACE, that the regexes in the second corpus that\ntrigger \"regex too complex\" errors all need 300 to 360 MB to compile\nwith our HEAD code. With the new patch attached, they compile\nsuccessfully in a dozen or so MB. (Yesterday's patch really did\nnothing at all for these worst-case regexes, BTW.)\n\nI also see about a 10% speedup overall, which I'm pretty sure is\ndown to needing fewer interactions with malloc() (this is partially\na function of having batched the state allocations, of course).\nSo even if there is a locality-of-reference loss, it's swamped by\nfewer mallocs and less total space used.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 25 Feb 2021 19:16:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "I wrote:\n> So I rearranged things to allocate arcs out of a common pool, and for\n> good measure made the state allocation code do the same thing. I was\n> pretty much blown away by the results: not only is the average-case\n> space usage about half what it is on HEAD, but the worst-case drops\n> by well more than a factor of ten.\n\nBTW, I was initially a bit baffled by how this could be. Per previous\nmeasurements, the average number of arcs per state is around 4; so\nif the code is allocating ten arcs for each state right off the bat,\nit's pretty clear how we could have a factor-of-two-or-so bloat\nproblem. 
And I think that does explain the average-case results.\nBut it can't possibly explain bloats of more than 10x.\n\nAfter further study I think this is what explains it:\n\n* The \"average number of arcs\" is pretty misleading, because in\n a large NFA some of the states have hundreds of out-arcs, while\n most have only a couple.\n\n* The NFA is not static; the code moves arcs around all the time.\n There's actually a function (moveouts) that deletes all the\n out-arcs of a state and creates images of them on another state.\n That operation can be invoked a lot of times during NFA optimization.\n\n* Once a given state has acquired N out-arcs, it keeps that pool\n of arc storage, even if some or all of those arcs get deleted.\n Indeed, the state itself could be dropped and later recycled,\n but it still keeps its arc pool. Unfortunately, even if it does\n get recycled for re-use, it's likely to be resurrected as a state\n with only a couple of out-arcs.\n\nSo I think the explanation for 20x or 30x bloat arises from the\noptimize pass resulting in having a bunch of states that have large\nbut largely unused arc pools. Getting rid of the per-state arc pools\nin favor of one common pool fixes that nicely.\n\nI realized while looking at this that some cycles could be shaved\nfrom moveouts, because there's no longer a reason why it can't just\nscribble on the arcs in-place (cf. now-obsolete comment on\nchangearctarget()). 
It's late but I'll see about improving that\ntomorrow.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Feb 2021 22:51:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "Hi,\n\nOn Fri, Feb 26, 2021, at 01:16, Tom Lane wrote:\n> 0007-smarter-regex-allocation-2.patch\n\nI've successfully tested this patch.\n\nI had to re-create the performance_test table\nsince some cases the previously didn't give an error,\nnow gives error \"invalid regular expression: invalid character range\".\nThis is expected and of course an improvement,\nbut just wanted to explain why the number of rows\ndon't match the previous test runs.\n\nCREATE TABLE performance_test AS\nSELECT\n subjects.subject,\n patterns.pattern,\n patterns.flags,\n tests.is_match,\n tests.captured\nFROM tests\nJOIN subjects ON subjects.subject_id = tests.subject_id\nJOIN patterns ON patterns.pattern_id = subjects.pattern_id\nWHERE tests.error IS NULL\n--\n-- the below part is added to ignore cases\n-- that now results in error:\n--\nAND NOT EXISTS (\n SELECT 1 FROM deviations\n WHERE deviations.test_id = tests.test_id\n AND deviations.error IS NOT NULL\n);\nSELECT 3253889\n\nComparing 13.2 with HEAD,\nnot a single test resulted in a different is_match value,\ni.e. the test just using the ~ regex operator,\nto only check if it matches or not. 
Good.\n\nSELECT COUNT(*)\nFROM deviations\nJOIN tests ON tests.test_id = deviations.test_id\nWHERE tests.is_match <> deviations.is_match\n\ncount\n-------\n 0\n(1 row)\n\nThe below query shows a frequency count per error message:\n\nSELECT error, COUNT(*)\nFROM deviations\nGROUP BY 1\nORDER BY 2 DESC\n\n error | count\n-----------------------------------------------------+--------\n | 106173\nregexp_match() does not support the \"global\" option | 5799\ninvalid regular expression: invalid character range | 1060\ninvalid regular expression option: \"y\" | 277\n(4 rows)\n\nAs we can see, 106173 cases now goes through without an error,\nthat previously gave an error. This is thanks to now allowing escape\nsequences within bracket expressions.\n\nThe other errors are expected and all good.\n\nEnd of correctness analysis. Now let's look at performance!\nI reran the same query three times to get a feeling for the stddev.\n\n\\timing\n\nSELECT\n is_match <> (subject ~ pattern),\n captured IS DISTINCT FROM regexp_match(subject, pattern, flags),\n COUNT(*)\nFROM performance_test\nGROUP BY 1,2\nORDER BY 1,2;\n\n?column? | ?column? | count\n----------+----------+---------\nf | f | 3253889\n(1 row)\n\nHEAD (b3a9e9897ec702d56602b26a8cdc0950f23b29dc)\nTime: 125938.747 ms (02:05.939)\nTime: 125414.792 ms (02:05.415)\nTime: 126185.496 ms (02:06.185)\n\nHEAD (b3a9e9897ec702d56602b26a8cdc0950f23b29dc)+0007-smarter-regex-allocation-2.patch\n\n?column? | ?column? | count\n----------+----------+---------\nf | f | 3253889\n(1 row)\n\nTime: 89145.030 ms (01:29.145)\nTime: 89083.210 ms (01:29.083)\nTime: 89166.442 ms (01:29.166)\n\nThat's a 29% speed-up compared to HEAD! Truly amazing.\n\nLet's have a look at the total speed-up compared to PostgreSQL 13.\n\nIn my previous benchmarks testing against old versions,\nI used precompiled binaries, but this time I compiled REL_13_STABLE:\n\nTime: 483390.132 ms (08:03.390)\n\nThat's a 82% speed-up in total! 
Amazing!\n\n/Joel\n", "msg_date": "Fri, 26 Feb 2021 17:42:32 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> On Fri, Feb 26, 2021, at 01:16, Tom Lane wrote:\n>> 0007-smarter-regex-allocation-2.patch\n\n> I've successfully tested this patch.\n\nCool, thanks for testing!\n\n> That's a 29% speed-up compared to HEAD! Truly amazing.\n\nHmm, I'm still only seeing about 10% or a little better.\nI wonder why the difference in your numbers. Either way,\nthough, I'll take it, since the main point here is to cut\nmemory consumption and not so much cycles.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 26 Feb 2021 13:55:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "On Fri, Feb 26, 2021, at 19:55, Tom Lane wrote:\n> \"Joel Jacobson\" <joel@compiler.org> writes:\n> > On Fri, Feb 26, 2021, at 01:16, Tom Lane wrote:\n> >> 0007-smarter-regex-allocation-2.patch\n> \n> > I've successfully tested this patch.\n> \n> Cool, thanks for testing!\n\nI thought it would be interesting to see if any differences\nin *where* matches occur not only *what* matches.\n\nI've compared the output from regexp_positions()\nbetween REL_13_STABLE and HEAD.\n\nI'm happy to report no differences were found,\nexcept some new expected\n\n invalid regular expression: invalid character range\n\nerrors due to the fixes.\n\nThis time I also ran into the\n\n ([\"'`])(?:\\\\\\1|.)*?\\1\n\npattern due to using the flags,\nwhich caused a timeout on REL_13_STABLE,\nbut the same pattern is fast on HEAD.\n\nAll good.\n\n/Joel\n", "msg_date": "Sat, 06 Mar 2021 06:03:30 +0100", 
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Some regular-expression performance hacking" }, { "msg_contents": "On Sat, Feb 13, 2021 at 06:19:34PM +0100, Joel Jacobson wrote:\n> To test the correctness of the patches,\n> I thought it would be nice with some real-life regexes,\n> and just as important, some real-life text strings,\n> to which the real-life regexes are applied to.\n> \n> I therefore patched Chromium's v8 regexes engine,\n> to log the actual regexes that get compiled when\n> visiting websites, and also the text strings that\n> are the regexes are applied to during run-time\n> when the regexes are executed.\n> \n> I logged the regex and text strings as base64 encoded\n> strings to STDOUT, to make it easy to grep out the data,\n> so it could be imported into PostgreSQL for analytics.\n> \n> In total, I scraped the first-page of some ~50k websites,\n> which produced 45M test rows to import,\n> which when GROUP BY pattern and flags was reduced\n> down to 235k different regex patterns,\n> and 1.5M different text string subjects.\n\nIt's great to see this kind of testing. Thanks for doing it.\n\n\n", "msg_date": "Sat, 6 Mar 2021 10:09:25 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Some regular-expression performance hacking" } ]
[ { "msg_contents": "A few more easy tests for things not covered at all:\n\n bytea LIKE bytea (bytealike)\n bytea NOT LIKE bytea (byteanlike)\n ESCAPE clause for the above (like_escape_bytea)\n\nalso\n\n name NOT ILIKE text (nameicnlike)\n\nSee also \n<https://coverage.postgresql.org/src/backend/utils/adt/like.c.func-sort-c.html>.", "msg_date": "Thu, 11 Feb 2021 14:23:14 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Add tests for bytea LIKE operator" } ]
[ { "msg_contents": "Greetings\n\nWe have following syntax:\n\n ALTER THING name [ NO ] DEPENDS ON EXTENSION name\n\nfor the following THINGs:\n\n- ALTER TRIGGER\n- ALTER FUNCTION\n- ALTER PROCEDURE\n- ALTER ROUTINE\n- ALTER MATERIALIZED VIEW\n- ALTER INDEX\n\nIn the documentation, the \"[ NO ]\" option is listed in the synopsis for\nALTER TRIGGER and ALTER FUNCTION, but not the others.\nTrivial patch attached.\n\nWill add to next CF.\n\nRegards\n\nIan Barwick\n\n\n-- \nEnterpriseDB: https://www.enterprisedb.com", "msg_date": "Fri, 12 Feb 2021 10:32:14 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": true, "msg_subject": "[DOC] add missing \"[ NO ]\" to various \"DEPENDS ON\" synopses" }, { "msg_contents": "On Fri, Feb 12, 2021 at 10:32:14AM +0900, Ian Lawrence Barwick wrote:\n> In the documentation, the \"[ NO ]\" option is listed in the synopsis for\n> ALTER TRIGGER and ALTER FUNCTION, but not the others.\n> Trivial patch attached.\n\nThere are two flavors to cover for 6 commands per gram.y, and you are\ncovering all of them. So this looks good to me. I'll apply and\nbackpatch in a bit. It is worth noting that tab-complete.c does a bad\njob in completing those clauses.\n--\nMichael", "msg_date": "Sat, 13 Feb 2021 11:52:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [DOC] add missing \"[ NO ]\" to various \"DEPENDS ON\" synopses" }, { "msg_contents": "2021年2月13日(土) 11:52 Michael Paquier <michael@paquier.xyz>:\n\n> On Fri, Feb 12, 2021 at 10:32:14AM +0900, Ian Lawrence Barwick wrote:\n> > In the documentation, the \"[ NO ]\" option is listed in the synopsis for\n> > ALTER TRIGGER and ALTER FUNCTION, but not the others.\n> > Trivial patch attached.\n>\n> There are two flavors to cover for 6 commands per gram.y, and you are\n> covering all of them. So this looks good to me. I'll apply and\n> backpatch in a bit. 
It is worth noting that tab-complete.c does a bad\n> job in completing those clauses.\n> --\n> Michael\n>\n\n\n-- \nEnterpriseDB: https://www.enterprisedb.com", "msg_date": "Mon, 15 Feb 2021 15:52:31 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [DOC] add missing \"[ NO ]\" to various \"DEPENDS ON\" synopses" }, { "msg_contents": "2021年2月13日(土) 11:52 Michael Paquier <michael@paquier.xyz>:\n\n> On Fri, Feb 12, 2021 at 10:32:14AM +0900, Ian Lawrence Barwick wrote:\n> > In the documentation, the \"[ NO ]\" option is listed in the synopsis for\n> > ALTER TRIGGER and ALTER FUNCTION, but not the others.\n> > Trivial patch attached.\n>\n> There are two flavors to cover for 6 commands per gram.y, and you are\n> covering all of them. So this looks good to me. I'll apply and\n> backpatch in a bit.\n\n\nThanks! (Apologies for the preceding blank mail).\n\nIt is worth noting that tab-complete.c does a bad\n> job in completing those clauses.\n>\n\nIndeed it does. 
Not the most exciting of use cases, though I imagine it\nmight come in handy for anyone developing an extension, and the\nexisting implementation is inconsistent (in place for ALTER INDEX,\nand partially for ALTER MATERIALIZED VIEW, but not the others).\nPatch suggestion attached.\n\nRegards\n\nIan Barwick\n\n\n-- \nEnterpriseDB: https://www.enterprisedb.com", "msg_date": "Mon, 15 Feb 2021 15:57:04 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [DOC] add missing \"[ NO ]\" to various \"DEPENDS ON\" synopses" }, { "msg_contents": "On Mon, Feb 15, 2021 at 03:57:04PM +0900, Ian Lawrence Barwick wrote:\n> Indeed it does. Not the most exciting of use cases, though I imagine it\n> might come in handy for anyone developing an extension, and the\n> existing implementation is inconsistent (in place for ALTER INDEX,\n> and partially for ALTER MATERIALIZED VIEW, but not the others).\n> Patch suggestion attached.\n\nThanks.\n\n- else if (Matches(\"ALTER\", \"INDEX\", MatchAny, \"NO\", \"DEPENDS\"))\n- COMPLETE_WITH(\"ON EXTENSION\");\n- else if (Matches(\"ALTER\", \"INDEX\", MatchAny, \"DEPENDS\"))\n- COMPLETE_WITH(\"ON EXTENSION\");\nThe part, if removed, means that typing \"alter index my_index no \" is\nnot able to complete with \"DEPENDS ON EXTENSION\" anymore. So it seems\nto me that ALTER INDEX got that right, and that the other commands had\nbetter do the same.\n--\nMichael", "msg_date": "Tue, 16 Feb 2021 10:20:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [DOC] add missing \"[ NO ]\" to various \"DEPENDS ON\" synopses" }, { "msg_contents": "2021年2月16日(火) 10:20 Michael Paquier <michael@paquier.xyz>:\n\n> On Mon, Feb 15, 2021 at 03:57:04PM +0900, Ian Lawrence Barwick wrote:\n> > Indeed it does. 
Not the most exciting of use cases, though I imagine it\n> > might come in handy for anyone developing an extension, and the\n> > existing implementation is inconsistent (in place for ALTER INDEX,\n> > and partially for ALTER MATERIALIZED VIEW, but not the others).\n> > Patch suggestion attached.\n>\n> Thanks.\n>\n> - else if (Matches(\"ALTER\", \"INDEX\", MatchAny, \"NO\", \"DEPENDS\"))\n> - COMPLETE_WITH(\"ON EXTENSION\");\n> - else if (Matches(\"ALTER\", \"INDEX\", MatchAny, \"DEPENDS\"))\n> - COMPLETE_WITH(\"ON EXTENSION\");\n> The part, if removed, means that typing \"alter index my_index no \" is\n> not able to complete with \"DEPENDS ON EXTENSION\" anymore. So it seems\n> to me that ALTER INDEX got that right, and that the other commands had\n> better do the same.\n>\n\nHmm, with the current implementation \"alter index my_index no <TAB>\"\ndoesn't work\nanyway; you'd need to add this before the above lines:\n\n+ else if (Matches(\"ALTER\", \"INDEX\", MatchAny, \"NO\"))\n+ COMPLETE_WITH(\"DEPENDS\");\n\nso AFAICT the patch doesn't change that behaviour. It does mean \"alter index\nmy_index no depends <TAB>\" no longer completes to \"ON EXTENSION\", but if\nyou've\ntyped one of \"NO\" or \"DEPENDS\" in that context, \"ON EXTENSION\" is the only\ncompletion so I'm not sure what's gained by forcing the user to hit TAB\ntwice.\n\nThere are quite a few tab completions consisting of more than one word\n(e.g. \"MATERIALIZED VIEW\", \"FORCE ROW LEVEL SECURITY\") where tab completion\nis\nineffective after the first word followed by a space, e.g. \"alter\nmaterialized\n<TAB>\" doesn't result in any expansion either. 
I suppose we could go\nthrough all\nthose and handle each word individually, but presumably there's a reason why\nthat hasn't been done already (maybe no-one has complained?).\n\nRegards\n\nIan Barwick\n\n-- \nEnterpriseDB: https://www.enterprisedb.com", "msg_date": "Tue, 16 Feb 2021 11:18:47 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [DOC] add missing \"[ NO ]\" to various \"DEPENDS ON\" synopses" }, { "msg_contents": "On Tue, Feb 16, 2021 at 11:18:47AM +0900, Ian Lawrence Barwick wrote:\n> Hmm, with the current implementation \"alter index my_index no <TAB>\"\n> doesn't work\n> anyway; you'd need to add this before the above lines:\n> \n> + else if (Matches(\"ALTER\", \"INDEX\", MatchAny, \"NO\"))\n> + COMPLETE_WITH(\"DEPENDS\");\n> \n> so AFAICT the patch doesn't change that behaviour. It does mean \"alter index\n> my_index no depends <TAB>\" no longer completes to \"ON EXTENSION\", but if\n> you've\n> typed one of \"NO\" or \"DEPENDS\" in that context, \"ON EXTENSION\" is the only\n> completion so I'm not sure what's gained by forcing the user to hit TAB\n> twice.\n\nYou are right. It looks like I have tested without a whitespace after\nthe \"NO\". With a whitespace it does not work, so that looks like a\ncomplication for little gain. Another problem with the code on HEAD\nis that you would not complete properly \"NO DEPENDS ON\", so that feels\nhalf-completed.\n\n> There are quite a few tab completions consisting of more than one word\n> (e.g. \"MATERIALIZED VIEW\", \"FORCE ROW LEVEL SECURITY\") where tab completion\n> is\n> ineffective after the first word followed by a space, e.g. \"alter\n> materialized\n> <TAB>\" doesn't result in any expansion either. 
I suppose we could go\n> through all\n> those and handle each word individually, but presumably there's a reason why\n> that hasn't been done already (maybe no-one has complained?).\n\nBecause that's just extra maintenance as most people will just\ncomplete after typing the first set of characters? This part got\ndiscussed as of 1e324cb:\nhttps://www.postgresql.org/message-id/CALtqXTcogrFEVP9uou5vFtnGsn+vHZUu9+9a0inarfYVOHScYQ@mail.gmail.com\n\nAnyway, after sleeping on it, I have just applied your original patch\nas that's simpler, and will cover the cases people would care for.\n--\nMichael", "msg_date": "Wed, 17 Feb 2021 12:00:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [DOC] add missing \"[ NO ]\" to various \"DEPENDS ON\" synopses" } ]
[ { "msg_contents": "ts=# \\errverbose \nERROR: XX000: invalid memory alloc request size 18446744073709551613\n\n#0 pg_re_throw () at elog.c:1716\n#1 0x0000000000a33b12 in errfinish (filename=0xbff20e \"mcxt.c\", lineno=959, funcname=0xbff2db <__func__.6684> \"palloc\") at elog.c:502\n#2 0x0000000000a6760d in palloc (size=18446744073709551613) at mcxt.c:959\n#3 0x00000000009fb149 in text_to_cstring (t=0x2aaae8023010) at varlena.c:212\n#4 0x00000000009fbf05 in textout (fcinfo=0x2094538) at varlena.c:557\n#5 0x00000000006bdd50 in ExecInterpExpr (state=0x2093990, econtext=0x20933d8, isnull=0x7fff5bf04a87) at execExprInterp.c:1112\n#6 0x00000000006d4f18 in ExecEvalExprSwitchContext (state=0x2093990, econtext=0x20933d8, isNull=0x7fff5bf04a87) at ../../../src/include/executor/executor.h:316\n#7 0x00000000006d4f81 in ExecProject (projInfo=0x2093988) at ../../../src/include/executor/executor.h:350\n#8 0x00000000006d5371 in ExecScan (node=0x20932c8, accessMtd=0x7082e0 <SeqNext>, recheckMtd=0x708385 <SeqRecheck>) at execScan.c:238\n#9 0x00000000007083c2 in ExecSeqScan (pstate=0x20932c8) at nodeSeqscan.c:112\n#10 0x00000000006d1b00 in ExecProcNodeInstr (node=0x20932c8) at execProcnode.c:466\n#11 0x00000000006e742c in ExecProcNode (node=0x20932c8) at ../../../src/include/executor/executor.h:248\n#12 0x00000000006e77de in ExecAppend (pstate=0x2089208) at nodeAppend.c:267\n#13 0x00000000006d1b00 in ExecProcNodeInstr (node=0x2089208) at execProcnode.c:466\n#14 0x000000000070964f in ExecProcNode (node=0x2089208) at ../../../src/include/executor/executor.h:248\n#15 0x0000000000709795 in ExecSort (pstate=0x2088ff8) at nodeSort.c:108\n#16 0x00000000006d1b00 in ExecProcNodeInstr (node=0x2088ff8) at execProcnode.c:466\n#17 0x00000000006d1ad1 in ExecProcNodeFirst (node=0x2088ff8) at execProcnode.c:450\n#18 0x00000000006dec36 in ExecProcNode (node=0x2088ff8) at ../../../src/include/executor/executor.h:248\n#19 0x00000000006df079 in fetch_input_tuple (aggstate=0x2088a20) at 
nodeAgg.c:589\n#20 0x00000000006e1fad in agg_retrieve_direct (aggstate=0x2088a20) at nodeAgg.c:2368\n#21 0x00000000006e1bfd in ExecAgg (pstate=0x2088a20) at nodeAgg.c:2183\n#22 0x00000000006d1b00 in ExecProcNodeInstr (node=0x2088a20) at execProcnode.c:466\n#23 0x00000000006d1ad1 in ExecProcNodeFirst (node=0x2088a20) at execProcnode.c:450\n#24 0x00000000006c6ffa in ExecProcNode (node=0x2088a20) at ../../../src/include/executor/executor.h:248\n#25 0x00000000006c966b in ExecutePlan (estate=0x2032f48, planstate=0x2088a20, use_parallel_mode=false, operation=CMD_SELECT, sendTuples=true, numberTuples=0, direction=ForwardScanDirection, dest=0xbb3400 <donothingDR>, \n execute_once=true) at execMain.c:1632\n\n#3 0x00000000009fb149 in text_to_cstring (t=0x2aaae8023010) at varlena.c:212\n212 result = (char *) palloc(len + 1);\n\n(gdb) l\n207 /* must cast away the const, unfortunately */\n208 text *tunpacked = pg_detoast_datum_packed(unconstify(text *, t));\n209 int len = VARSIZE_ANY_EXHDR(tunpacked);\n210 char *result;\n211\n212 result = (char *) palloc(len + 1);\n\n(gdb) p len\n$1 = -4\n\nThis VM had some issue early today and I killed the VM, causing PG to execute\nrecovery. I'm tentatively blaming that on zfs, so this could conceivably be a\ndata error (although recovery supposedly would have resolved it). 
I just\nchecked and data_checksums=off.\n\nThe query has mode(), string_agg(), distinct.\n\nHere's a redacted plan for the query:\n\n GroupAggregate (cost=15681340.44..20726393.56 rows=908609 width=618)\n Group Key: (((COALESCE(a.ii, $0) || lpad(a.ii, 5, '0'::text)) || lpad(a.ii, 5, '0'::text))), a.ii, (COALESCE(a.ii, $2)), (CASE (a.ii)::integer WHEN 1 THEN 'qq'::text WHEN 2 THEN 'qq'::text WHEN 3 THEN 'qq'::text WHEN 4 THEN 'qq'::text WHEN 5 THEN 'qq qq'::text WHEN 6 THEN 'qq-qq'::text ELSE a.ii END), (CASE WHEN (COALESCE(a.ii, $3) = substr(a.ii, 1, length(COALESCE(a.ii, $4)))) THEN 'qq qq'::text WHEN (hashed SubPlan 7) THEN 'qq qq'::text ELSE 'qq qq qq'::text END)\n InitPlan 1 (returns $0)\n -> Seq Scan on d\n InitPlan 3 (returns $2)\n -> Seq Scan on d d\n InitPlan 4 (returns $3)\n -> Seq Scan on d d\n InitPlan 5 (returns $4)\n -> Seq Scan on d d\n InitPlan 6 (returns $5)\n -> Seq Scan on d d\n -> Sort (cost=15681335.39..15704050.62 rows=9086093 width=313)\n Sort Key: (((COALESCE(a.ii, $0) || lpad(a.ii, 5, '0'::text)) || lpad(a.ii, 5, '0'::text))), a.ii, (COALESCE(a.ii, $2)), (CASE (a.ii)::integer WHEN 1 THEN 'qq'::text WHEN 2 THEN 'qq'::text WHEN 3 THEN 'qq'::text WHEN 4 THEN 'qq'::text WHEN 5 THEN 'qq qq'::text WHEN 6 THEN 'qq-qq'::text ELSE a.ii END), (CASE WHEN (COALESCE(a.ii, $3) = substr(a.ii, 1, length(COALESCE(a.ii, $4)))) THEN 'qq qq'::text WHEN (hashed SubPlan 7) THEN 'qq qq'::text ELSE 'qq qq qq'::text END)\n -> Append (cost=1.01..13295792.30 rows=9086093 width=313)\n -> Seq Scan on a a (cost=1.01..5689033.34 rows=3948764 width=328)\n Filter: ((ii >= '2021-02-10 00:00:00+10'::timestamp with time zone) AND (ii < '2021-02-11 00:00:00+10'::timestamp with time zone))\n SubPlan 7\n -> Seq Scan on d d (cost=0.00..1.01 rows=1 width=7)\n -> Seq Scan on b (cost=1.01..12.75 rows=1 width=417)\n Filter: ((ii >= '2021-02-10 00:00:00+10'::timestamp with time zone) AND (ii < '2021-02-11 00:00:00+10'::timestamp with time zone))\n SubPlan 11\n -> Seq Scan on d d 
(cost=0.00..1.01 rows=1 width=7)\n -> Seq Scan on c c (cost=1.01..7561315.74 rows=5137328 width=302)\n Filter: ((ii >= '2021-02-10 00:00:00+10'::timestamp with time zone) AND (ii < '2021-02-11 00:00:00+10'::timestamp with time zone))\n SubPlan 14\n -> Seq Scan on d d (cost=0.00..1.01 rows=1 width=7)\n\nI restored to a test cluster, but so far not able to reproduce the issue there,\nso I'm soliciting suggestions how to debug it further.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 11 Feb 2021 19:48:37 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "pg13.2: invalid memory alloc request size NNNN" }, { "msg_contents": "On Thu, Feb 11, 2021 at 07:48:37PM -0600, Justin Pryzby wrote:\n> #3 0x00000000009fb149 in text_to_cstring (t=0x2aaae8023010) at varlena.c:212\n> 212 result = (char *) palloc(len + 1);\n> \n> (gdb) l\n> 207 /* must cast away the const, unfortunately */\n> 208 text *tunpacked = pg_detoast_datum_packed(unconstify(text *, t));\n> 209 int len = VARSIZE_ANY_EXHDR(tunpacked);\n> 210 char *result;\n> 211\n> 212 result = (char *) palloc(len + 1);\n> \n> (gdb) p len\n> $1 = -4\n\nI reproduced this with a simpler query:\n\nts=# explain analyze SELECT CASE rattype::integer WHEN NNNNN THEN '.......' 
END AS ra_type FROM t WHERE tm BETWEEN '2021-02-11 23:55' AND '2021-02-11 23:56';\n\n#0 pg_re_throw () at elog.c:1714\n#1 0x00000000008aa0f6 in errfinish (filename=<optimized out>, filename@entry=0xa3ff9e \"mcxt.c\", lineno=lineno@entry=959, funcname=funcname@entry=0xa400f8 <__func__.7429> \"palloc\") at elog.c:502\n#2 0x00000000008d2344 in palloc (size=18446744073709551613) at mcxt.c:959\n#3 0x00000000008819ab in text_to_cstring (t=0x2aaad43ad008) at varlena.c:212\n#4 0x0000000000629b1d in ExecInterpExpr (state=0x1df4350, econtext=0x1df34b8, isnull=<optimized out>) at execExprInterp.c:1112\n#5 0x0000000000636c22 in ExecEvalExprSwitchContext (isNull=0x7ffc2a0ed0d7, econtext=0x1df34b8, state=0x1df4350) at ../../../src/include/executor/executor.h:316\n#6 ExecProject (projInfo=0x1df4348) at ../../../src/include/executor/executor.h:350\n#7 ExecScan (node=<optimized out>, accessMtd=0x644170 <BitmapHeapNext>, recheckMtd=0x644b40 <BitmapHeapRecheck>) at execScan.c:238\n#8 0x0000000000633ef8 in ExecProcNodeInstr (node=0x1df32a8) at execProcnode.c:466\n#9 0x000000000062d192 in ExecProcNode (node=0x1df32a8) at ../../../src/include/executor/executor.h:248\n#10 ExecutePlan (execute_once=<optimized out>, dest=0x9fc360 <donothingDR>, direction=<optimized out>, numberTuples=0, sendTuples=true, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x1df32a8,\n estate=0x1df3078) at execMain.c:1632\n\n(gdb) p (varattrib_1b)t\n$16 = {va_header = 8 '\\b', va_data = 0x7ffcf4bd4bb9 \"\\200}\\310\\324\\177\"}\n\n(gdb) p ((varattrib_4b)t)->va_4byte->va_header\n$22 = 3363667976\n\n(gdb) down\n#4 0x00000000009fbf05 in textout (fcinfo=0x2fd5e48) at varlena.c:557\n557 PG_RETURN_CSTRING(TextDatumGetCString(txt));\n(gdb) p *fcinfo \n$1 = {flinfo = 0x2fd5df8, context = 0x0, resultinfo = 0x0, fncollation = 0, isnull = false, nargs = 1, args = 0x2fd5e68}\n(gdb) p *fcinfo->args\n$2 = {value = 140551873462280, isnull = false}\n\nJust now, this cluster was killed while creating an index 
on a separate table:\n\nProgram received signal SIGSEGV, Segmentation fault.\n0x00007ff549e53f33 in __memcpy_sse2 () from /lib64/libc.so.6\n(gdb) bt\n#0 0x00007ff549e53f33 in __memcpy_sse2 () from /lib64/libc.so.6\n#1 0x00000000009fee14 in varstrfastcmp_locale (a1p=0x363e424 \"\", len1=-4, a2p=0x3a81a51 \"\", len2=15, ssup=0x2e100c8) at varlena.c:2337\n#2 0x00000000009febbd in varlenafastcmp_locale (x=56878112, y=61348432, ssup=0x2e100c8) at varlena.c:2249\n#3 0x0000000000a6ea4d in ApplySortComparator (datum1=56878112, isNull1=false, datum2=61348432, isNull2=false, ssup=0x2e100c8) at ../../../../src/include/utils/sortsupport.h:224\n#4 0x0000000000a7938f in comparetup_index_btree (a=0x7ff53b375798, b=0x7ff53a4a5048, state=0x2e0fc28) at tuplesort.c:4147\n#5 0x0000000000a6f237 in qsort_tuple (a=0x7ff53a4a5048, n=1071490, cmp_tuple=0xa79318 <comparetup_index_btree>, state=0x2e0fc28) at qsort_tuple.c:150\n#6 0x0000000000a74b53 in tuplesort_sort_memtuples (state=0x2e0fc28) at tuplesort.c:3490\n#7 0x0000000000a7427b in dumptuples (state=0x2e0fc28, alltuples=true) at tuplesort.c:3156\n#8 0x0000000000a72397 in tuplesort_performsort (state=0x2e0fc28) at tuplesort.c:2038\n#9 0x00000000005011d2 in _bt_leafbuild (btspool=0x2e139a8, btspool2=0x0) at nbtsort.c:553\n#10 0x0000000000500d56 in btbuild (heap=0x7ff54afef410, index=0x7ff54aff5250, indexInfo=0x2d5e8e8) at nbtsort.c:333\n#11 0x00000000005787d1 in index_build (heapRelation=0x7ff54afef410, indexRelation=0x7ff54aff5250, indexInfo=0x2d5e8e8, isreindex=false, parallel=true) at index.c:2962\n#12 0x0000000000575ba7 in index_create (heapRelation=0x7ff54afef410, indexRelationName=0x2d624a8 \"cdrs_huawei_sgsnpdprecord_2021_02_12_servedimsi_idx\", indexRelationId=3880431557, parentIndexRelid=0, parentConstraintId=0,\n relFileNode=0, indexInfo=0x2d5e8e8, indexColNames=0x2e12ae0, accessMethodObjectId=403, tableSpaceId=3787872951, collationObjectId=0x2e12c08, classObjectId=0x2e12c50, coloptions=0x2e12c68, reloptions=48311088, 
flags=0,\n constr_flags=0, allow_system_table_mods=false, is_internal=false, constraintId=0x7ffc8964c23c) at index.c:1231\n#13 0x0000000000651cd9 in DefineIndex (relationId=3840862493, stmt=0x2d5e7a8, indexRelationId=0, parentIndexId=0, parentConstraintId=0, is_alter_table=false, check_rights=true, check_not_in_use=true, skip_build=false,\n quiet=false) at indexcmds.c:1105\n#14 0x00000000008c620d in ProcessUtilitySlow (pstate=0x2d62398, pstmt=0x2d3bd78,\n queryString=0x2d3ae48 \"CREATE INDEX ii ON tt (cc) WITH (FILLFACTOR=100) TABLESPACE cdr_index;\",\n context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x2d3c038, qc=0x7ffc8964cdd0) at utility.c:1517\n\n(gdb) up\n#2 0x00000000009febbd in varlenafastcmp_locale (x=56878112, y=61348432, ssup=0x2e100c8) at varlena.c:2249\n2249 result = varstrfastcmp_locale(a1p, len1, a2p, len2, ssup);\n\n(gdb) l\n2244 a2p = VARDATA_ANY(arg2);\n2245\n2246 len1 = VARSIZE_ANY_EXHDR(arg1);\n2247 len2 = VARSIZE_ANY_EXHDR(arg2);\n2248\n2249 result = varstrfastcmp_locale(a1p, len1, a2p, len2, ssup);\n2250\n2251 /* We can't afford to leak memory here. 
*/\n2252 if (PointerGetDatum(arg1) != x)\n2253 pfree(arg1);\n\n(gdb) p len1\n$1 = -4\n(gdb) p len2\n$2 = 15\n\nAnd now I was able to crash again by creating index on a 3rd, similar table.\n\nFYI, this is running centos 7.8.\n\n\n", "msg_date": "Fri, 12 Feb 2021 11:02:52 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg13.2: invalid memory alloc request size NNNN" }, { "msg_contents": "\n\nOn 2/12/21 2:48 AM, Justin Pryzby wrote:\n> ts=# \\errverbose\n> ERROR: XX000: invalid memory alloc request size 18446744073709551613\n> \n> #0 pg_re_throw () at elog.c:1716\n> #1 0x0000000000a33b12 in errfinish (filename=0xbff20e \"mcxt.c\", lineno=959, funcname=0xbff2db <__func__.6684> \"palloc\") at elog.c:502\n> #2 0x0000000000a6760d in palloc (size=18446744073709551613) at mcxt.c:959\n> #3 0x00000000009fb149 in text_to_cstring (t=0x2aaae8023010) at varlena.c:212\n> #4 0x00000000009fbf05 in textout (fcinfo=0x2094538) at varlena.c:557\n> #5 0x00000000006bdd50 in ExecInterpExpr (state=0x2093990, econtext=0x20933d8, isnull=0x7fff5bf04a87) at execExprInterp.c:1112\n> #6 0x00000000006d4f18 in ExecEvalExprSwitchContext (state=0x2093990, econtext=0x20933d8, isNull=0x7fff5bf04a87) at ../../../src/include/executor/executor.h:316\n> #7 0x00000000006d4f81 in ExecProject (projInfo=0x2093988) at ../../../src/include/executor/executor.h:350\n> #8 0x00000000006d5371 in ExecScan (node=0x20932c8, accessMtd=0x7082e0 <SeqNext>, recheckMtd=0x708385 <SeqRecheck>) at execScan.c:238\n> #9 0x00000000007083c2 in ExecSeqScan (pstate=0x20932c8) at nodeSeqscan.c:112\n> #10 0x00000000006d1b00 in ExecProcNodeInstr (node=0x20932c8) at execProcnode.c:466\n> #11 0x00000000006e742c in ExecProcNode (node=0x20932c8) at ../../../src/include/executor/executor.h:248\n> #12 0x00000000006e77de in ExecAppend (pstate=0x2089208) at nodeAppend.c:267\n> #13 0x00000000006d1b00 in ExecProcNodeInstr (node=0x2089208) at execProcnode.c:466\n> #14 0x000000000070964f in 
ExecProcNode (node=0x2089208) at ../../../src/include/executor/executor.h:248\n> #15 0x0000000000709795 in ExecSort (pstate=0x2088ff8) at nodeSort.c:108\n> #16 0x00000000006d1b00 in ExecProcNodeInstr (node=0x2088ff8) at execProcnode.c:466\n> #17 0x00000000006d1ad1 in ExecProcNodeFirst (node=0x2088ff8) at execProcnode.c:450\n> #18 0x00000000006dec36 in ExecProcNode (node=0x2088ff8) at ../../../src/include/executor/executor.h:248\n> #19 0x00000000006df079 in fetch_input_tuple (aggstate=0x2088a20) at nodeAgg.c:589\n> #20 0x00000000006e1fad in agg_retrieve_direct (aggstate=0x2088a20) at nodeAgg.c:2368\n> #21 0x00000000006e1bfd in ExecAgg (pstate=0x2088a20) at nodeAgg.c:2183\n> #22 0x00000000006d1b00 in ExecProcNodeInstr (node=0x2088a20) at execProcnode.c:466\n> #23 0x00000000006d1ad1 in ExecProcNodeFirst (node=0x2088a20) at execProcnode.c:450\n> #24 0x00000000006c6ffa in ExecProcNode (node=0x2088a20) at ../../../src/include/executor/executor.h:248\n> #25 0x00000000006c966b in ExecutePlan (estate=0x2032f48, planstate=0x2088a20, use_parallel_mode=false, operation=CMD_SELECT, sendTuples=true, numberTuples=0, direction=ForwardScanDirection, dest=0xbb3400 <donothingDR>,\n> execute_once=true) at execMain.c:1632\n> \n> #3 0x00000000009fb149 in text_to_cstring (t=0x2aaae8023010) at varlena.c:212\n> 212 result = (char *) palloc(len + 1);\n> \n> (gdb) l\n> 207 /* must cast away the const, unfortunately */\n> 208 text *tunpacked = pg_detoast_datum_packed(unconstify(text *, t));\n> 209 int len = VARSIZE_ANY_EXHDR(tunpacked);\n> 210 char *result;\n> 211\n> 212 result = (char *) palloc(len + 1);\n> \n> (gdb) p len\n> $1 = -4\n> \n> This VM had some issue early today and I killed the VM, causing PG to execute\n> recovery. I'm tentatively blaming that on zfs, so this could conceivably be a\n> data error (although recovery supposedly would have resolved it). 
I just\n> checked and data_checksums=off.\n> \n\nThis seems very much like a corrupted varlena header - length (-4) is \nclearly bogus, and it's what triggers the problem, because that's what \nwraps around to 18446744073709551613 (which is 0xFFFFFFFFFFFFFFFD).\n\nThis has to be a value stored in a table, not some intermediate value \ncreated during execution. So I don't think the exact query matters. Can \nyou try doing something like pg_dump, which has to detoast everything?\n\nThe question is whether this is due to the VM getting killed in some \nstrange way (what VM system is this, how is the storage mounted?) or \nwhether the recovery is borked and failed to do the right thing.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 12 Feb 2021 18:44:54 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg13.2: invalid memory alloc request size NNNN" }, { "msg_contents": "On Fri, Feb 12, 2021 at 06:44:54PM +0100, Tomas Vondra wrote:\n> > (gdb) p len\n> > $1 = -4\n> > \n> > This VM had some issue early today and I killed the VM, causing PG to execute\n> > recovery. I'm tentatively blaming that on zfs, so this could conceivably be a\n> > data error (although recovery supposedly would have resolved it). I just\n> > checked and data_checksums=off.\n> \n> This seems very much like a corrupted varlena header - length (-4) is\n> clearly bogus, and it's what triggers the problem, because that's what wraps\n> around to 18446744073709551613 (which is 0xFFFFFFFFFFFFFFFD).\n> \n> This has to be a value stored in a table, not some intermediate value\n> created during execution. So I don't think the exact query matters. 
Can you\n> try doing something like pg_dump, which has to detoast everything?\n\nRight, COPY fails and VACUUM FULL crashes.\n\nmessage | invalid memory alloc request size 18446744073709551613\nquery | COPY child.tt TO '/dev/null';\n\n> The question is whether this is due to the VM getting killed in some strange\n> way (what VM system is this, how is the storage mounted?) or whether the\n> recovery is borked and failed to do the right thing.\n\nThis is qemu/kvm, with block storage:\n <driver name='qemu' type='raw' cache='none' io='native'/>\n <source dev='/dev/data/postgres'/>\n\nAnd then more block devices for ZFS vdevs:\n <driver name='qemu' type='raw' cache='none' io='native'/>\n <source dev='/dev/data/zfs2'/>\n ...\n\nThose are LVM volumes (I know that ZFS/LVM is discouraged).\n\n$ zpool list -v\nNAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT\nzfs 762G 577G 185G - - 71% 75% 1.00x ONLINE -\n vdj 127G 92.7G 34.3G - - 64% 73.0% - ONLINE \n vdd 127G 95.6G 31.4G - - 74% 75.2% - ONLINE \n vdf 127G 96.0G 31.0G - - 75% 75.6% - ONLINE \n vdg 127G 95.8G 31.2G - - 74% 75.5% - ONLINE \n vdh 127G 95.5G 31.5G - - 74% 75.2% - ONLINE \n vdi 128G 102G 25.7G - - 71% 79.9% - ONLINE \n\nThis is recently upgraded to ZFS 2.0.0, and then to 2.0.1:\n\nJan 21 09:33:26 Installed: zfs-dkms-2.0.1-1.el7.noarch\nDec 23 08:41:21 Installed: zfs-dkms-2.0.0-1.el7.noarch\n\nThe VM has gotten \"wedged\" and I've had to kill it a few times in the last 24h\n(needless to say this is not normal). That part seems like a kernel issue and\nnot postgres problem. It's unclear if that's due to me trying to tickle the\npostgres ERROR. 
It's the latest centos7 kernel: 3.10.0-1160.15.2.el7.x86_64\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 12 Feb 2021 12:10:52 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: pg13.2: invalid memory alloc request size NNNN" }, { "msg_contents": "I think to get a size of -4 you would be trying to read a varlena\npointer pointing to four nul bytes. I bet if you run dd on the\ncorresponding block you'll find a chunk of nuls in the page. That\nperhaps makes sense with ZFS where if a new page was linked to the\ntree but never written it would be an uninitialized page rather than\nthe old data.\n\nI'm becoming increasingly convinced that there are a lot of storage\nsystems out there that just lose data whenever they crash or lose\npower. Systems that are supposed to be better than that.\n\n\n", "msg_date": "Sat, 13 Feb 2021 06:35:07 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: pg13.2: invalid memory alloc request size NNNN" } ]
[ { "msg_contents": "There is another snowball release out, and I have prepared a patch to \nintegrate it. It's very big and mostly boring, so I'm not attaching it \nhere, but you can see it at\n\nhttps://github.com/petere/postgresql/commit/d0aa6c2148bcef10942959035ce14f1810873593.patch\n\nMajor changes are new stemmers for Armenian, Serbian, and Yiddish.\n\n\n", "msg_date": "Fri, 12 Feb 2021 11:14:34 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "snowball update" }, { "msg_contents": "On Fri, Feb 12, 2021 at 1:14 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> There is another snowball release out, and I have prepared a patch to\n> integrate it. It's very big and mostly boring, so I'm not attaching it\n> here, but you can see it at\n>\n> https://github.com/petere/postgresql/commit/d0aa6c2148bcef10942959035ce14f1810873593.patch\n>\n> Major changes are new stemmers for Armenian, Serbian, and Yiddish.\n>\n>\n\nGood time to add new languages.\n\nWe don't have (and really it's impossible) regression test for stemmers, so\nmaybe we should warn users about possible inconsistencies of old\ntsvectors and new stemmers ?\n\n-- \nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Sat, 13 Feb 2021 00:00:21 +0300", "msg_from": "Oleg Bartunov <obartunov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: snowball update" }, { "msg_contents": "On 2021-02-12 22:00, Oleg Bartunov wrote:\n> We don't have (and really it's impossible) regression test for stemmers, so\n> maybe we should warn users about possible inconsistencies of old\n> tsvectors and new stemmers ?\n\nYeah, it's analogous to collation and Unicode updates. We could invent \na versioning mechanism; we have some of the infrastructure for that now. 
\n But until we do that, perhaps some more elaborate guidance in the \nmajor version release notes would be appropriate.\n\n\n", "msg_date": "Mon, 15 Feb 2021 14:20:14 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: snowball update" } ]
[ { "msg_contents": "Hello,\n\nI am seeing errors in replication in a test program that I've been running for years with very little change (since 2017, really [1]).\n\nThe symptom:\nHEAD-replication fails (most of the time) when cascading 3 instances (master+2 replicas).\n\nHEAD-replication works with 2 instances (master+replica).\n\nI have also compiled a server on top of 4ad31bb2ef25 (avoiding some recent changes) - and this server runs the same test program without failure; so I think the culprit might be somewhere in those changes. Or (always possible) there might be something my testing does wrong - but then again, I do this test a few times every week, and it never fails.\n\nThis weekend I can dig into it some more (make a self-sufficient example) but I thought I'd mention it already. perhaps one of you see the light immediately...\n\n\nErik Rijkers\n\n[1] https://www.postgresql.org/message-id/flat/3897361c7010c4ac03f358173adbcd60%40xs4all.nl\n\n\n", "msg_date": "Fri, 12 Feb 2021 13:33:57 +0100 (CET)", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": true, "msg_subject": "logical replication seems broken" }, { "msg_contents": "On Fri, Feb 12, 2021 at 6:04 PM Erik Rijkers <er@xs4all.nl> wrote:\n>\n> Hello,\n>\n> I am seeing errors in replication in a test program that I've been running for years with very little change (since 2017, really [1]).\n>\n> The symptom:\n> HEAD-replication fails (most of the time) when cascading 3 instances (master+2 replicas).\n>\n> HEAD-replication works with 2 instances (master+replica).\n>\n> I have also compiled a server on top of 4ad31bb2ef25 (avoiding some recent changes) - and this server runs the same test program without failure; so I think the culprit might be somewhere in those changes.\n>\n\nOh, if you are running this on today's HEAD then the recent commit\nbff456d7a0 could be the culprit but not sure without knowing the\ndetails.\n\n> Or (always possible) there might be something my testing does wrong - but 
then again, I do this test a few times every week, and it never fails.\n>\n\nDo you expect anything in particular while running this test, any\nexpected messages? What is exactly failing?\n\n> This weekend I can dig into it some more (make a self-sufficient example) but I thought I'd mention it already. perhaps one of you see the light immediately...\n>\n\nThat would be really helpful.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 12 Feb 2021 18:21:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication seems broken" }, { "msg_contents": "> On 02/12/2021 1:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> \n> On Fri, Feb 12, 2021 at 6:04 PM Erik Rijkers <er@xs4all.nl> wrote:\n> >\n> > Hello,\n> >\n> > I am seeing errors in replication in a test program that I've been running for years with very little change (since 2017, really [1]).\n\nHi,\n\nHere is a test program. Careful, it deletes stuff. And it will need some changes:\n\nI compile postgres server versions into directories like:\n $HOME/pg_stuff/pg_installations/pgsql.$project where project is a name\n\nThe attached script (logrep_cascade_bug.sh) assumes that two such compiled versions are present (on my machine they are called HEAD and head0):\n $HOME/pg_stuff/pg_installations/pgsql.HEAD --> git master as of today - friday 12 febr 2021\n $HOME/pg_stuff/pg_installations/pgsql.head0 --> 3063eb17593c so that's from 11 febr, before the replication changes\n\nIn the test script, bash variables 'project' (and 'BIN') reflect my set up - so should probably be changed.\n\nThe instance from today 12 february ('HEAD') has the bug:\n it keeps endlessly waiting/looping with 'NOK' (=Not OK).\n 'Not OK' means: primary not identical to all replicas (replica1 seems ok, but replica2 remains empty)\n\nThe instance from yesterday 11 february ('head0') is ok:\n it finishes in 20 s after waiting/looping just 2 or 3 times\n 'ok' means: all 
replicas are identical to primary (as proven by the md5s).\n\nThat's all I have for now - I have no deeper idea about what exactly goes wrong.\n\nI hope that helps, let me know when you cannot reproduce the problem.\n\nErik Rijkers", "msg_date": "Fri, 12 Feb 2021 17:30:31 +0100 (CET)", "msg_from": "er@xs4all.nl", "msg_from_op": false, "msg_subject": "Re: logical replication seems broken" }, { "msg_contents": "On Fri, Feb 12, 2021 at 10:00 PM <er@xs4all.nl> wrote:\n>\n> > On 02/12/2021 1:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > On Fri, Feb 12, 2021 at 6:04 PM Erik Rijkers <er@xs4all.nl> wrote:\n> > >\n> > > Hello,\n> > >\n> > > I am seeing errors in replication in a test program that I've been running for years with very little change (since 2017, really [1]).\n>\n> Hi,\n>\n> Here is a test program. Careful, it deletes stuff. And it will need some changes:\n>\n\nThanks for sharing the test. I think I have found the problem.\nActually, it was an existing code problem exposed by the commit\nce0fdbfe97. In pgoutput_begin_txn(), we were sometimes sending the\nprepare_write ('w') message but then the actual message was not being\nsent. This was the case when we didn't found the origin of a txn. This\ncan happen after that commit because we have now started using origins\nfor tablesync workers as well and those origins are removed once the\ntablesync workers are finished. 
We might want to change the behavior\nrelated to the origin messages as indicated in the comments but for\nnow, fixing the existing code.\n\nCan you please test if the attached fixes the problem at your end as well?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Sat, 13 Feb 2021 16:19:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication seems broken" }, { "msg_contents": "> On 02/13/2021 11:49 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> On Fri, Feb 12, 2021 at 10:00 PM <er@xs4all.nl> wrote:\n> >\n> > > On 02/12/2021 1:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Feb 12, 2021 at 6:04 PM Erik Rijkers <er@xs4all.nl> wrote:\n> > > >\n> > > > I am seeing errors in replication in a test program that I've been running for years with very little change (since 2017, really [1]).\n> >\n> > Hi,\n> >\n> > Here is a test program. Careful, it deletes stuff. And it will need some changes:\n> >\n> \n> Thanks for sharing the test. I think I have found the problem.\n> Actually, it was an existing code problem exposed by the commit\n> ce0fdbfe97. In pgoutput_begin_txn(), we were sometimes sending the\n> prepare_write ('w') message but then the actual message was not being\n> sent. This was the case when we didn't found the origin of a txn. This\n> can happen after that commit because we have now started using origins\n> for tablesync workers as well and those origins are removed once the\n> tablesync workers are finished. 
We might want to change the behavior\n> related to the origin messages as indicated in the comments but for\n> now, fixing the existing code.\n> \n> Can you please test if the attached fixes the problem at your end as well?\n\n> [fix_origin_message_1.patch]\n\nI compiled just now a binary from HEAD, and a binary from HEAD+patch\n\nHEAD is still broken; your patch rescues it, so yes, fixed.\n\nMaybe a test (check or check-world) should be added to run a second replica? (Assuming that would have caught this bug)\n\n\nThanks,\n\nErik Rijkers\n \n\n\n\n\n\n\n> \n> -- \n> With Regards,\n> Amit Kapila.\n\n\n", "msg_date": "Sat, 13 Feb 2021 13:28:29 +0100 (CET)", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": true, "msg_subject": "Re: logical replication seems broken" }, { "msg_contents": "On Sat, Feb 13, 2021 at 5:58 PM Erik Rijkers <er@xs4all.nl> wrote:\n>\n> > On 02/13/2021 11:49 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Feb 12, 2021 at 10:00 PM <er@xs4all.nl> wrote:\n> > >\n> > > > On 02/12/2021 1:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Fri, Feb 12, 2021 at 6:04 PM Erik Rijkers <er@xs4all.nl> wrote:\n> > > > >\n> > > > > I am seeing errors in replication in a test program that I've been running for years with very little change (since 2017, really [1]).\n> > >\n> > > Hi,\n> > >\n> > > Here is a test program. Careful, it deletes stuff. And it will need some changes:\n> > >\n> >\n> > Thanks for sharing the test. I think I have found the problem.\n> > Actually, it was an existing code problem exposed by the commit\n> > ce0fdbfe97. In pgoutput_begin_txn(), we were sometimes sending the\n> > prepare_write ('w') message but then the actual message was not being\n> > sent. This was the case when we didn't found the origin of a txn. 
This\n> > can happen after that commit because we have now started using origins\n> > for tablesync workers as well and those origins are removed once the\n> > tablesync workers are finished. We might want to change the behavior\n> > related to the origin messages as indicated in the comments but for\n> > now, fixing the existing code.\n> >\n> > Can you please test if the attached fixes the problem at your end as well?\n>\n> > [fix_origin_message_1.patch]\n>\n> I compiled just now a binary from HEAD, and a binary from HEAD+patch\n>\n> HEAD is still broken; your patch rescues it, so yes, fixed.\n>\n> Maybe a test (check or check-world) should be added to run a second replica? (Assuming that would have caught this bug)\n>\n\n+1 for the idea of having a test for this. I have written a test for this.\nThanks for the fix Amit, I could reproduce the issue without your fix\nand verified that the issue gets fixed with the patch you shared.\nAttached a patch for the same. Thoughts?\n\nRegards,\nVignesh", "msg_date": "Mon, 15 Feb 2021 11:53:00 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication seems broken" }, { "msg_contents": "On Mon, Feb 15, 2021 at 11:53 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Sat, Feb 13, 2021 at 5:58 PM Erik Rijkers <er@xs4all.nl> wrote:\n> >\n> >\n> > I compiled just now a binary from HEAD, and a binary from HEAD+patch\n> >\n> > HEAD is still broken; your patch rescues it, so yes, fixed.\n> >\n> > Maybe a test (check or check-world) should be added to run a second replica? (Assuming that would have caught this bug)\n> >\n>\n> +1 for the idea of having a test for this. I have written a test for this.\n> Thanks for the fix Amit, I could reproduce the issue without your fix\n> and verified that the issue gets fixed with the patch you shared.\n> Attached a patch for the same. Thoughts?\n>\n\nI have slightly modified the comments in the test case to make things\nclear. 
I am planning not to backpatch this because there is no way in\nthe core code to hit this prior to commit ce0fdbfe97 and we haven't\nreceived any complaints so far. What do you think?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Mon, 15 Feb 2021 17:01:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication seems broken" }, { "msg_contents": "On Mon, Feb 15, 2021 at 5:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Feb 15, 2021 at 11:53 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Sat, Feb 13, 2021 at 5:58 PM Erik Rijkers <er@xs4all.nl> wrote:\n> > >\n> > >\n> > > I compiled just now a binary from HEAD, and a binary from HEAD+patch\n> > >\n> > > HEAD is still broken; your patch rescues it, so yes, fixed.\n> > >\n> > > Maybe a test (check or check-world) should be added to run a second replica? (Assuming that would have caught this bug)\n> > >\n> >\n> > +1 for the idea of having a test for this. I have written a test for this.\n> > Thanks for the fix Amit, I could reproduce the issue without your fix\n> > and verified that the issue gets fixed with the patch you shared.\n> > Attached a patch for the same. Thoughts?\n> >\n>\n> I have slightly modified the comments in the test case to make things\n> clear. I am planning not to backpatch this because there is no way in\n> the core code to hit this prior to commit ce0fdbfe97 and we haven't\n> received any complaints so far. What do you think?\n\nThe changes look fine to me.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 15 Feb 2021 17:36:38 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication seems broken" }, { "msg_contents": "\n> On 2021.02.15. 
12:31 Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Mon, Feb 15, 2021 at 11:53 AM vignesh C <vignesh21@gmail.com> wrote:\n> > On Sat, Feb 13, 2021 at 5:58 PM Erik Rijkers <er@xs4all.nl> wrote:\n> > > I compiled just now a binary from HEAD, and a binary from HEAD+patch\n> > > HEAD is still broken; your patch rescues it, so yes, fixed.\n> > > Maybe a test (check or check-world) should be added to run a second replica? (Assuming that would have caught this bug)\n> > >\n> > +1 for the idea of having a test for this. I have written a test for this.\n> > Thanks for the fix Amit, I could reproduce the issue without your fix\n> > and verified that the issue gets fixed with the patch you shared.\n> > Attached a patch for the same. Thoughts?\n> >\n> \n> I have slightly modified the comments in the test case to make things\n> clear. I am planning not to backpatch this because there is no way in\n> the core code to hit this prior to commit ce0fdbfe97 and we haven't\n> received any complaints so far. What do you think?\n\nMy tests indeed run OK with this.\n\n(I haven't tested whether the newly added test actually tests for the problem that was there - I suppose one of you did that)\n\nThanks!\n\nErik Rijkers\n\n\n", "msg_date": "Mon, 15 Feb 2021 13:44:51 +0100 (CET)", "msg_from": "er@xs4all.nl", "msg_from_op": false, "msg_subject": "Re: logical replication seems broken" }, { "msg_contents": "On Mon, Feb 15, 2021 at 6:14 PM <er@xs4all.nl> wrote:\n>\n>\n> > On 2021.02.15. 12:31 Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Mon, Feb 15, 2021 at 11:53 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > On Sat, Feb 13, 2021 at 5:58 PM Erik Rijkers <er@xs4all.nl> wrote:\n> > > > I compiled just now a binary from HEAD, and a binary from HEAD+patch\n> > > > HEAD is still broken; your patch rescues it, so yes, fixed.\n> > > > Maybe a test (check or check-world) should be added to run a second replica? 
(Assuming that would have caught this bug)\n> > > >\n> > > +1 for the idea of having a test for this. I have written a test for this.\n> > > Thanks for the fix Amit, I could reproduce the issue without your fix\n> > > and verified that the issue gets fixed with the patch you shared.\n> > > Attached a patch for the same. Thoughts?\n> > >\n> >\n> > I have slightly modified the comments in the test case to make things\n> > clear. I am planning not to backpatch this because there is no way in\n> > the core code to hit this prior to commit ce0fdbfe97 and we haven't\n> > received any complaints so far. What do you think?\n>\n> My tests indeed run OK with this.\n>\n> (I haven't tested whether the newly added test actually tests for the problem that was there - I suppose one of you did that)\n>\n\nI could re-create the scenario that you had faced with this test. This\ntest case is a simplified test of your script, I have removed the use\nof pgbench, reduced the number of tables used and simulated the same\nproblem with the similar replication setup that you had used.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 15 Feb 2021 19:50:09 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication seems broken" }, { "msg_contents": "On Mon, Feb 15, 2021 at 7:50 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, Feb 15, 2021 at 6:14 PM <er@xs4all.nl> wrote:\n> >\n> >\n> > > On 2021.02.15. 12:31 Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > On Mon, Feb 15, 2021 at 11:53 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > > On Sat, Feb 13, 2021 at 5:58 PM Erik Rijkers <er@xs4all.nl> wrote:\n> > > > > I compiled just now a binary from HEAD, and a binary from HEAD+patch\n> > > > > HEAD is still broken; your patch rescues it, so yes, fixed.\n> > > > > Maybe a test (check or check-world) should be added to run a second replica? 
(Assuming that would have caught this bug)\n> > > > >\n> > > > +1 for the idea of having a test for this. I have written a test for this.\n> > > > Thanks for the fix Amit, I could reproduce the issue without your fix\n> > > > and verified that the issue gets fixed with the patch you shared.\n> > > > Attached a patch for the same. Thoughts?\n> > > >\n> > >\n> > > I have slightly modified the comments in the test case to make things\n> > > clear. I am planning not to backpatch this because there is no way in\n> > > the core code to hit this prior to commit ce0fdbfe97 and we haven't\n> > > received any complaints so far. What do you think?\n> >\n> > My tests indeed run OK with this.\n> >\n> > (I haven't tested whether the newly added test actually tests for the problem that was there - I suppose one of you did that)\n> >\n>\n> I could re-create the scenario that you had faced with this test. This\n> test case is a simplified test of your script, I have removed the use\n> of pgbench, reduced the number of tables used and simulated the same\n> problem with the similar replication setup that you had used.\n>\n\nYeah, I was also able to see an issue with this test without a patch\nand it got fixed after the patch. Pushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 16 Feb 2021 09:07:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: logical replication seems broken" } ]
[ { "msg_contents": "Hi,\n\nI wonder, is there a specific reason that MakeTupleTableSlot is\nwrapped up in MakeSingleTupleTableSlot without doing anything than\njust returning the slot created by MakeTupleTableSlot? Do we really\nneed MakeSingleTupleTableSlot? Can't we just use MakeTupleTableSlot\ndirectly? Am I missing something?\n\nI think we can avoid some unnecessary function call costs, for\ninstance when called 1000 times inside table_slot_create from\ncopyfrom.c or in some other places where MakeSingleTupleTableSlot is\ncalled in a loop.\n\nIf it's okay to remove MakeSingleTupleTableSlot and use\nMakeTupleTableSlot instead, we might have to change in a lot of\nplaces. If we don't want to change in those many files, we could\nrename MakeTupleTableSlot to MakeSingleTupleTableSlot and change it in\nonly a few places.\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 12 Feb 2021 19:13:38 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Why do we have MakeSingleTupleTableSlot instead of not using\n MakeTupleTableSlot?" }, { "msg_contents": "Hi,\nMakeSingleTupleTableSlot can be defined as a macro, calling\nMakeTupleTableSlot().\n\nCheers\n\nOn Fri, Feb 12, 2021 at 5:44 AM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> Hi,\n>\n> I wonder, is there a specific reason that MakeTupleTableSlot is\n> wrapped up in MakeSingleTupleTableSlot without doing anything than\n> just returning the slot created by MakeTupleTableSlot? Do we really\n> need MakeSingleTupleTableSlot? Can't we just use MakeTupleTableSlot\n> directly? 
Am I missing something?\n>\n> I think we can avoid some unnecessary function call costs, for\n> instance when called 1000 times inside table_slot_create from\n> copyfrom.c or in some other places where MakeSingleTupleTableSlot is\n> called in a loop.\n>\n> If it's okay to remove MakeSingleTupleTableSlot and use\n> MakeTupleTableSlot instead, we might have to change in a lot of\n> places. If we don't want to change in those many files, we could\n> rename MakeTupleTableSlot to MakeSingleTupleTableSlot and change it in\n> only a few places.\n>\n> Thoughts?\n>\n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n>\n>\n>\n", "msg_date": "Fri, 12 Feb 2021 08:09:56 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Why do we have MakeSingleTupleTableSlot instead of not using\n MakeTupleTableSlot?" 
}, { "msg_contents": "On Fri, Feb 12, 2021 at 9:37 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> On Fri, Feb 12, 2021 at 5:44 AM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> Hi,\n>>\n>> I wonder, is there a specific reason that MakeTupleTableSlot is\n>> wrapped up in MakeSingleTupleTableSlot without doing anything than\n>> just returning the slot created by MakeTupleTableSlot? Do we really\n>> need MakeSingleTupleTableSlot? Can't we just use MakeTupleTableSlot\n>> directly? Am I missing something?\n>>\n>> I think we can avoid some unnecessary function call costs, for\n>> instance when called 1000 times inside table_slot_create from\n>> copyfrom.c or in some other places where MakeSingleTupleTableSlot is\n>> called in a loop.\n>>\n>> If it's okay to remove MakeSingleTupleTableSlot and use\n>> MakeTupleTableSlot instead, we might have to change in a lot of\n>> places. If we don't want to change in those many files, we could\n>> rename MakeTupleTableSlot to MakeSingleTupleTableSlot and change it in\n>> only a few places.\n>>\n>> Thoughts?\n>\n> MakeSingleTupleTableSlot can be defined as a macro, calling MakeTupleTableSlot().\n\nRight, we could as well have an inline function. My point was that why\ndo we need to wrap MakeTupleTableSlot inside MakeSingleTupleTableSlot\nwhich just does nothing. As I said upthread, how about renaming\nMakeTupleTableSlot to\nMakeSingleTupleTableSlot which requires minimal changes? Patch\nattached. Both make check and make check-world passes on it. Please\nhave a look.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sat, 13 Feb 2021 09:54:08 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Why do we have MakeSingleTupleTableSlot instead of not using\n MakeTupleTableSlot?" 
}, { "msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> Right, we could as well have an inline function. My point was that why\n> do we need to wrap MakeTupleTableSlot inside MakeSingleTupleTableSlot\n> which just does nothing. As I said upthread, how about renaming\n> MakeTupleTableSlot to\n> MakeSingleTupleTableSlot which requires minimal changes?\n\nI'm disinclined to change this just to save one level of function call.\nIf you dig in the git history (see f92e8a4b5 in particular) you'll note\nthat the current version of MakeTupleTableSlot originated as code shared\nbetween ExecAllocTableSlot and MakeSingleTupleTableSlot. The fact that\nthe latter is currently just equivalent to that shared functionality is\nsomething that happened later and might need to change again.\n\nIt is fair to wonder why execTuples.c exports MakeTupleTableSlot at\nall, though. ExecAllocTableSlot is supposed to be used by code that\nexpects ExecutorEnd to clean up the slot, while MakeSingleTupleTableSlot\nis supposed to pair with ExecDropSingleTupleTableSlot. Direct use of\nMakeTupleTableSlot leaves one wondering who is holding the bag for\nslot cleanup. The external callers of it all look to be pretty new\ncode, so I wonder how carefully that's been thought through.\n\nIn short: I'm not okay with doing\ns/MakeTupleTableSlot/MakeSingleTupleTableSlot/g in a patch that doesn't\nalso introduce matching ExecDropSingleTupleTableSlot calls (unless those\nexist somewhere already; but where?). If we did clean that up, maybe\nMakeTupleTableSlot could become \"static\". But I'd still be inclined to\nkeep it physically separate, leaving it to the compiler to decide whether\nto inline it into the callers.\n\nThere's a separate question of whether any of the call sites that lack\ncleanup support represent live resource-leak bugs. I see that they all\nuse TTSOpsVirtual, so maybe that's a slot type that never holds any\ninteresting resources (like buffer pins). 
If so, maybe the best thing is\nto invent a wrapper \"MakeVirtualTupleTableSlot\" or the like, ensuring such\ncallers use a TupleTableSlotOps type that doesn't require cleanup.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 13 Feb 2021 00:51:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why do we have MakeSingleTupleTableSlot instead of not using\n MakeTupleTableSlot?" } ]
[ { "msg_contents": "I was trying to use triggers, and ran into something I hadn't realized\nuntil now: triggers run, not as the owner of the table, but as the user who\nis doing the insert/update/delete.\n\nIt seems to me that for a lot of the suggested uses of triggers this is not\nthe desired behaviour. For example, in the canonical \"logging table\"\nexample, we want to be able to allow users to make changes to the base\ntable, but we don't want them to be able to insert to the log table except\nindirectly by causing the trigger to fire. Even in cases where a trigger is\njust checking whether the update is permissible, it seems to me that it\nmight be useful to refer to information not accessible to the user doing\nthe changes.\n\nHas there been any discussion of this before? I know Postgres has had\ntriggers for a long time but at the same time I don't see how anybody could\ndo a lot of work with triggers without finding this to be a problem at some\npoint. I haven't found any discussion of this in the documentation.", "msg_date": "Fri, 12 Feb 2021 12:35:55 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Trigger execution role" }, { "msg_contents": "Isaac Morland <isaac.morland@gmail.com> writes:\n> I was trying to use triggers, and ran into something I hadn't realized\n> until now: triggers run, not as the owner of the table, but as the user who\n> is doing the insert/update/delete.\n\nIf you don't want that, you can make the trigger function SECURITY\nDEFINER. If we forced such behavior, there'd be no way to get the\nother behavior.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 12 Feb 2021 12:58:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Trigger execution role" }, { "msg_contents": "On Fri, 12 Feb 2021 at 12:58, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Isaac Morland <isaac.morland@gmail.com> writes:\n> > I was trying to use triggers, and ran into something I hadn't realized\n> > until now: triggers run, not as the owner of the table, but as the user\n> who\n> > is doing the insert/update/delete.\n>\n> If you don't want that, you can make the trigger function SECURITY\n> DEFINER.  If we forced such behavior, there'd be no way to get the\n> other behavior.\n>\n\nI did think about SECURITY DEFINER, but that has at least a couple of\nsignificant downsides:\n\n- can't re-use the same generic trigger function for different table\nowners; would need to duplicate the function and use the one whose owner\nmatches the table\n- other users could make the function a trigger for their tables and then\ninvoke it unexpectedly (i.e., in a scenario I didn’t anticipate)\n- have to grant EXECUTE on the function to the same users that need\npermission to change the table contents\n\nIn what scenarios is it needed for the trigger to run as the role doing the\nINSERT/UPDATE/DELETE?
There are lots of scenarios where it doesn't matter —\nI can think of any number of constraint enforcement triggers that just\ncompute a boolean and which could run as either — but I find it a lot\neasier to imagine a scenario where the table owner wants to do something\nwhen an INSERT/UPDATE/DELETE occurs than one in which the table owner wants\nto make sure the role changing the table does something.\n\nAdditionally, with the present behaviour, what happens when I change a\ntable's contents is completely unpredictable. A table could have a trigger\non it which drops all my tables, to take an extreme example. If “I” am\npostgres then this could be problematic: it’s not safe for a privileged\nuser to make changes to the contents of another role’s tables unless they\nare first verified to have no triggers on them (or, in theory, that the\ntriggers are harmless, but I’ve been playing enough Baba is You lately to\nconsider any judgement that the triggers are harmless to be worthless\nwithout a formally verified proof of same).", "msg_date": "Tue, 16 Feb 2021 15:59:41 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Trigger execution role" }, { "msg_contents": "\nOn 2/16/21 3:59 PM, Isaac Morland wrote:\n> On Fri, 12 Feb 2021 at 12:58, Tom Lane <tgl@sss.pgh.pa.us\n> <mailto:tgl@sss.pgh.pa.us>> wrote:\n>\n> Isaac Morland <isaac.morland@gmail.com\n> <mailto:isaac.morland@gmail.com>> writes:\n> > I was trying to use triggers, and ran into something I hadn't\n> realized\n> > until now: triggers run, not as the owner of the table, but as\n> the user who\n> > is doing the insert/update/delete.\n>\n> If you don't want that, you can make the trigger function SECURITY\n> DEFINER.  If we forced such behavior, there'd be no way to get the\n> other behavior.\n>\n>\n> I did think about SECURITY DEFINER, but that has at least a couple of\n> significant downsides:\n>\n> - can't re-use the same generic trigger function for different table\n> owners; would need to duplicate the function and use the one whose\n> owner matches the table\n> - other users could make the function a trigger for their tables and\n> then invoke it unexpectedly (i.e., in a scenario I didn’t anticipate)\n> - have to grant EXECUTE on the function to the same users that need\n> permission to change the table contents\n>\n> In what scenarios is it needed for the trigger to run as the role\n> doing the INSERT/UPDATE/DELETE? 
There are lots of scenarios where it\n> doesn't matter — I can think of any number of constraint enforcement\n> triggers that just compute a boolean and which could run as either —\n> but I find it a lot easier to imagine a scenario where the table owner\n> wants to do something when an INSERT/UPDATE/DELETE occurs than one in\n> which the table owner wants to make sure the role changing the table\n> does something.\n\n\nOne fairly obvious example is where the trigger is inserting audit data.\nIt needs to log the name of the user running the triggering statement\nrather than the table owner.\n\nTBH, I've used triggers very extensively over the years for a wide\nvariety of purposes and not found this to be a great issue.\n\n\n>\n> Additionally, with the present behaviour, what happens when I change a\n> table's contents is completely unpredictable. A table could have a\n> trigger on it which drops all my tables, to take an extreme example.\n> If “I” am postgres then this could be problematic: it’s not safe for a\n> privileged user to make changes to the contents of another role’s\n> tables unless they are first verified to have no triggers on them (or,\n> in theory, that the triggers are harmless, but I’ve been playing\n> enough Baba is You lately to consider any judgement that the triggers\n> are harmless to be worthless without a formally verified proof of same).\n\n\nWell, that's true of any function no matter how it's invoked. If the\nfunction is malicious it will do damage. If you suspect the database to\ncontain malicious trigger code, maybe disabling user triggers is in order.\n\nAnyway, just speculating, but maybe there is a case for allowing running\na trigger as the table owner, as part of the trigger creation. 
Certainly\nwe're a very long way past the time when we could reasonably change the\ndefault.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 16 Feb 2021 17:47:10 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Trigger execution role" } ]
[ { "msg_contents": "\n    I would like to share my thoughts in the list about the potential\nPostgreSQL <-> Babelfish integration. There is already a thread about\nprotocol hooks [1], but I'd like to offer my PoV from a higher level\nperspective and keep that thread for the technical aspects of the\nprotocol hooks. This is also a follow-up on a public blog post I\nrecently published [2], and the feedback I received to bring the topic\nto the ML.\n\n    As I stated in the mentioned post, I believe Babelfish is a very\nwelcomed addition to the PostgreSQL ecosystem. It allows PostgreSQL to\nreach other users, other use cases, other markets; something which in my\nopinion PostgreSQL really needs to extend its reach, to become a more\nrelevant player in the database market. The potential is there,\nspecially given all the extensibility points that PostgreSQL already\nhas, which are unparalleled in the industry.\n\n    I believe we should engage in a conversation, with AWS included,\nabout how we can possibly benefit from this integration. It must be\nsymbiotic, both \"parties\" should win with it, otherwise it won't work.\nBut I believe it can definitely be a win-win situation. There has been\nsome concerns that this may be for Amazon's own benefit, and would\nsuppose an increased maintenance burden for the PostgreSQL Community. I\nbelieve that analysis is not including the many benefits that such a\ncompatibility for PostgreSQL would bring in many fronts. And possibly,\nthe changes required to core, are beneficial for other areas of\nPostgreSQL. Several have already pointed out in the extensibility hooks\nthread that this could allow for new protocols into PostgreSQL,\nincluding the much desired v4 or an HTTP one. I can only strongly second\nthat, and we should also analyze it from this perspective.\n\n    There is also a risk factor that I believe needs to be factored into\nthe analysis, and is what is the risk of not doing anything. 
From my\nunderstanding, it is very clear that AWS wants to treat Babelfish as a\nkind of development branch, waiting for inclusion into mainline. But I\nalso believe, if this branch sits forever not merged, at some point, may\nbe under the risk of having its own life, becoming a fork. And if it\ndoes, it may become our \"MariaDB\". I would not like this to happen.\n\n    I'm happy to contribute what I can to this discussion: if we want\nBabelfish to be integrated, how, analyze pros and cons, etc. I see this\nas an incredible gift that, if managed properly, not only will make\nPostgreSQL much better in use-cases that cannot access now; but may also\nboost PostgreSQL's extensibility even further, and maybe even spark\ndevelopment of some projects (like v4 or HTTP protocol) that have been\nlonger dismissed because there were (logically) too many requisites for\nany v3 replacement, that made its replacement extremely hard.\n\n    But of course, these are just the humble 2 cents of a casual\n-hackers reader.\n\n\n    Álvaro\n\n\n[1]\nhttps://www.postgresql.org/message-id/CAGBW59d5SjLyJLt-jwNv%2BoP6esbD8SCB%3D%3D%3D11WVe5%3DdOHLQ5wQ%40mail.gmail.com\n[2] https://postgresql.fund/blog/babelfish-the-elephant-in-the-room/\n\n-- \n\nAlvaro Hernandez\n\n\n-----------\nOnGres\n\n\n\n\n", "msg_date": "Fri, 12 Feb 2021 19:26:12 +0100", "msg_from": "=?UTF-8?B?w4FsdmFybyBIZXJuw6FuZGV6?= <aht@ongres.com>", "msg_from_op": true, "msg_subject": "PostgreSQL <-> Babelfish integration" }, { "msg_contents": "On Fri, Feb 12, 2021 at 10:26 AM Álvaro Hernández <aht@ongres.com> wrote:\n> As I stated in the mentioned post, I believe Babelfish is a very\n> welcomed addition to the PostgreSQL ecosystem. It allows PostgreSQL to\n> reach other users, other use cases, other markets; something which in my\n> opinion PostgreSQL really needs to extend its reach, to become a more\n> relevant player in the database market. 
The potential is there,\n> specially given all the extensibility points that PostgreSQL already\n> has, which are unparalleled in the industry.\n\nLet's assume for the sake of argument that your analysis of the\nbenefits is 100% correct -- let's take it for granted that Babelfish\nis manna from heaven. It's still not clear that it's worth embracing\nBabelfish in the way that you have advocated.\n\nWe simply don't know what the costs are. Because there is no source\ncode available. Maybe that will change tomorrow or next week, but as\nof this moment there is simply nothing substantive to evaluate.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 12 Feb 2021 10:44:31 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL <-> Babelfish integration" }, { "msg_contents": "\n\nOn 12/2/21 19:44, Peter Geoghegan wrote:\n> On Fri, Feb 12, 2021 at 10:26 AM Álvaro Hernández <aht@ongres.com> wrote:\n>> As I stated in the mentioned post, I believe Babelfish is a very\n>> welcomed addition to the PostgreSQL ecosystem. It allows PostgreSQL to\n>> reach other users, other use cases, other markets; something which in my\n>> opinion PostgreSQL really needs to extend its reach, to become a more\n>> relevant player in the database market. The potential is there,\n>> specially given all the extensibility points that PostgreSQL already\n>> has, which are unparalleled in the industry.\n> Let's assume for the sake of argument that your analysis of the\n> benefits is 100% correct -- let's take it for granted that Babelfish\n> is manna from heaven. It's still not clear that it's worth embracing\n> Babelfish in the way that you have advocated.\n>\n> We simply don't know what the costs are. Because there is no source\n> code available. 
Maybe that will change tomorrow or next week, but as\n> of this moment there is simply nothing substantive to evaluate.\n\n    I'm sure if we embrace an open and honest conversation, we will be\nable to figure out what the integration costs are even before the source\ncode gets published. As I said, this goes beyond the very technical\ndetail of source code integration. Waiting until the source code is\npublished is a bit chicken-and-egg (as I presume the source will morph\ntowards convergence if there's work that may be started, even if it is\njust for example for protocol extensibility).\n\n    I'm sure this can be also discussed at an architectural level,\ngetting an analysis of what parts of PostgreSQL would be changed, what\nextension mechanisms are required, what is the volume of the project,\nand many others.\n\n    Álvaro\n\n\n-- \n\nAlvaro Hernandez\n\n\n-----------\nOnGres\n\n\n\n", "msg_date": "Fri, 12 Feb 2021 20:04:18 +0100", "msg_from": "=?UTF-8?B?w4FsdmFybyBIZXJuw6FuZGV6?= <aht@ongres.com>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL <-> Babelfish integration" }, { "msg_contents": "On Fri, 12 Feb 2021 at 19:44, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Fri, Feb 12, 2021 at 10:26 AM Álvaro Hernández <aht@ongres.com> wrote:\n> > As I stated in the mentioned post, I believe Babelfish is a very\n> > welcomed addition to the PostgreSQL ecosystem. It allows PostgreSQL to\n> > reach other users, other use cases, other markets; something which in my\n> > opinion PostgreSQL really needs to extend its reach, to become a more\n> > relevant player in the database market. The potential is there,\n> > specially given all the extensibility points that PostgreSQL already\n> > has, which are unparalleled in the industry.\n>\n> Let's assume for the sake of argument that your analysis of the\n> benefits is 100% correct -- let's take it for granted that Babelfish\n> is manna from heaven. 
It's still not clear that it's worth embracing\n> Babelfish in the way that you have advocated.\n>\n> We simply don't know what the costs are. Because there is no source\n> code available. Maybe that will change tomorrow or next week, but as\n> of this moment there is simply nothing substantive to evaluate.\n\nI agree. I believe that Babelfish's efforts can be compared with the\nzedstore and zheap efforts: they require work in core before they can\nbe integrated or added as an extension that could replace the normal\nheap tableam, and while core is being prepared we can discover what\ncan and cannot be prepared in core for this new feature. But as long\nas there is no information about what structural updates in core would\nbe required, no commitment can be made for inclusion. And although I\nwould agree that an extension system for custom protocols and parsers\nwould be interesting, I think it would be putting the cart before the\nhorse if you want to force a decision 4 years ahead of time [0],\nwithout ever having seen the code or even a design document.\n\nIn general, I think postgres could indeed benefit from a pluggable\nprotocol and dialect frontend, but as long as there are no public and\nopen projects that demonstrate the benefits or would provide a guide\nfor implementing such frontend, I see no reason for the postgres\nproject to put work into such a feature.\n\nWith regards,\n\nMatthias van de Meent\n\n[0] I believe this is an optimistic guess, based on the changes that\nwere (and are yet still) required for the zedstore and/or zheap\ntableam, but am happy to be proven wrong.\n\n\n", "msg_date": "Fri, 12 Feb 2021 20:13:26 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL <-> Babelfish integration" }, { "msg_contents": "On Fri, Feb 12, 2021 at 11:04 AM Álvaro Hernández <aht@ongres.com> wrote:\n> I'm sure if we embrace an open and honest conversation, we will be\n> able to 
figure out what the integration costs are even before the source\n> code gets published. As I said, this goes beyond the very technical\n> detail of source code integration.\n\nPerhaps that's true in one sense, but if the cost of integrating\nBabelfish is prohibitive then it's still not going to go anywhere. If\nit did happen then that would certainly involve at least one or two\nsenior community members that personally adopt it. That's our model\nfor everything, to some degree because there is no other way that it\ncould work. It's very bottom-up.\n\nFor better or worse, very high level discussion like this has always\nfollowed from code, not the other way around. We don't really have the\nability or experience to do it any other way IMO.\n\n> Waiting until the source code is\n> published is a bit chicken-and-egg (as I presume the source will morph\n> towards convergence if there's work that may be started, even if it is\n> just for example for protocol extensibility).\n\nWell, the priorities of Postgres development are not set in any fixed\nway (except to the limited extent that you're on the hook for anything\nyou integrate that breaks). I myself am not convinced that this is\nworth spending any time on right now, especially given the lack of\ncode to evaluate.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 12 Feb 2021 11:31:22 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL <-> Babelfish integration" }, { "msg_contents": "On Fri, Feb 12, 2021 at 11:13 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> I agree. 
I believe that Babelfish's efforts can be compared with the\n> zedstore and zheap efforts: they require work in core before they can\n> be integrated or added as an extension that could replace the normal\n> heap tableam, and while core is being prepared we can discover what\n> can and cannot be prepared in core for this new feature.\n\nI see what you mean, but even that seems generous to me, since, as I\nsaid, we don't have any Babelfish code to evaluate today. Whereas\nZedstore and zheap can actually be downloaded and tested.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 12 Feb 2021 11:34:13 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL <-> Babelfish integration" }, { "msg_contents": "Just wanted to link to the discussion on this from HN for anyone intersted:\nhttps://news.ycombinator.com/item?id=26114281", "msg_date": "Fri, 12 Feb 2021 17:35:18 -0500", "msg_from": "Adam Brusselback <adambrusselback@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL <-> Babelfish integration" }, { "msg_contents": "We are applying the Babelfish commits to the REL_12_STABLE branch now, and the plan is to merge them into the REL_13_STABLE and master branch ASAP after that. There should be a publicly downloadable git repository before very long.\r\n\r\nOn 2/12/21, 2:35 PM, \"Peter Geoghegan\" <pg@bowt.ie> wrote:\r\n\r\n CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\r\n\r\n\r\n\r\n On Fri, Feb 12, 2021 at 11:13 AM Matthias van de Meent\r\n <boekewurm+postgres@gmail.com> wrote:\r\n > I agree. 
I believe that Babelfish's efforts can be compared with the\r\n > zedstore and zheap efforts: they require work in core before they can\r\n > be integrated or added as an extension that could replace the normal\r\n > heap tableam, and while core is being prepared we can discover what\r\n > can and cannot be prepared in core for this new feature.\r\n\r\n I see what you mean, but even that seems generous to me, since, as I\r\n said, we don't have any Babelfish code to evaluate today. Whereas\r\n Zedstore and zheap can actually be downloaded and tested.\r\n\r\n --\r\n Peter Geoghegan\r\n\r\n\r\n\r\n", "msg_date": "Mon, 15 Feb 2021 16:01:16 +0000", "msg_from": "\"Finnerty, Jim\" <jfinnert@amazon.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL <-> Babelfish integration" }, { "msg_contents": "On Mon, 15 Feb 2021 at 17:01, Finnerty, Jim <jfinnert@amazon.com> wrote:\n>\n> We are applying the Babelfish commits to the REL_12_STABLE branch now, and the plan is to merge them into the REL_13_STABLE and master branch ASAP after that. There should be a publicly downloadable git repository before very long.\n\nHi,\n\nOut of curiosity, are you able to share the status on the publication\nof this repository?\n\nI mainly ask this because I haven't seen any announcements from Amazon\n/ AWS regarding the publication of the Babelfish project since the\nstart of this thread, and the relevant websites [0][1][2] also do not\nappear to have seen an update. The last mention of babelfish in a\nthread here on -hackers also only seem to date back to late March.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://babelfish-for-postgresql.github.io/babelfish-for-postgresql/\n[1] https://aws.amazon.com/rds/aurora/babelfish/\n[2] https://github.com/babelfish-for-postgresql/\n\n\n", "msg_date": "Wed, 25 Aug 2021 17:28:54 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL <-> Babelfish integration" } ]
[ { "msg_contents": "Hello,\n\nI am working on a project where I do not want Postgres to reuse free space\nin old pages (see\nhttps://www.postgresql.org/message-id/flat/CABjy%2BRhbFu_Hs8ZEiOzaPaJSGB9jqFF0gDU5gtwCLiurG3NLjQ%40mail.gmail.com\nfor details). I found that the HEAP_INSERT_SKIP_FSM flag accomplishes this.\nFor a long-term solution I see two options:\n\n1. Introduce a reloption for this.\n2. Implement it as a custom table access method in an extension.\n\nAs an experiment, I have created an extension which forwards all table\naccess functions to the builtin heap access method, but enables the\nHEAP_INSERT_SKIP_FSM flag for heap_insert and heap_multi_insert. However,\nthe check in heap_getnext (\nhttps://github.com/postgres/postgres/blob/REL_12_5/src/backend/access/heap/heapam.c#L1294-L1304)\nis a showstopper. Because the custom access method uses a different\nfunction table (so that I can override heap_insert and heap_multi_insert),\nheap_getnext errors out with \"only heap AM is supported\". I am currently\nhacking around this problem by duplicating all code up to and including\nheap_getnext, with this check commented out. Clearly this is not ideal, as\nchanges to the heap code in future updates might cause incompatibilities.\n\nAny ideas on how to proceed with this issue?\n\nThank you,\nNoah Bergbauer
", "msg_date": "Fri, 12 Feb 2021 22:05:55 +0100", "msg_from": "Noah Bergbauer <noah@statshelix.com>", "msg_from_op": true, "msg_subject": "Preventing free space from being reused" }, { "msg_contents": "Noah Bergbauer <noah@statshelix.com> writes:\n> I am working on a project where I do not want Postgres to reuse free space\n> in old pages (see\n> https://www.postgresql.org/message-id/flat/CABjy%2BRhbFu_Hs8ZEiOzaPaJSGB9jqFF0gDU5gtwCLiurG3NLjQ%40mail.gmail.com\n> for details). I found that the HEAP_INSERT_SKIP_FSM flag accomplishes this.\n> For a long-term solution I see two options:\n> 1. Introduce a reloption for this.\n> 2. Implement it as a custom table access method in an extension.\n\nTBH, I can't believe that this is actually a good idea. If we introduce\na reloption that does that, we'll just be getting users complaining about\ntable bloat ... but probably only after they get to a state where it's\ngoing to be horribly painful to get out of.\n\n(My reaction to your previous thread was that it was simply a question\nof blindly insisting on using BRIN indexes for a case that they're quite\nbadly adapted to. 
The better answer is to not use BRIN.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 12 Feb 2021 16:43:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Preventing free space from being reused" }, { "msg_contents": "> (My reaction to your previous thread was that it was simply a question\n> of blindly insisting on using BRIN indexes for a case that they're quite\n> badly adapted to. The better answer is to not use BRIN.)\n\nApologies, perhaps I am completely misunderstanding the motivation for BRIN?\n\n From the docs:\n>BRIN is designed for handling very large tables in which certain columns\nhave some natural correlation with their physical location within the table.\n>[...]\n>a table storing a store's sale orders might have a date column on which\neach order was placed, and most of the time the entries for earlier orders\nwill appear earlier in the table\n\nMy table is very large, and the column in question has a strong natural\ncorrelation with each tuple's physical location. It is, in fact, a date\ncolumn where entries with earlier timestamps will appear earlier in the\ntable. To be honest, if this isn't a use case for BRIN, then I don't know\nwhat is. The only exception to this is a small proportion of tuples which\nare slotted into random older pages due to their small size.\n\nA btree index on the same column is 700x the size of BRIN, or 10% of\nrelation itself. It does not perform significantly better than BRIN. The\nissue here is twofold: not only does slotting these tuples into older pages\nsignificantly reduce the effectiveness of BRIN, it also causes\nfragmentation on disk. Ultimately, this is why CLUSTER exists. 
One way to\nlook at this situation is that my data is inserted exactly in index order,\nbut Postgres keeps un-clustering it for reasons that are valid in general\n(don't waste disk space) but don't apply at all in this case (the file\nsystem uses compression, no space is wasted).\n\nAny alternative ideas would of course be much appreciated! But at the\nmoment HEAP_INSERT_SKIP_FSM seems like the most practical solution to me.\n\n\n\n\nOn Fri, Feb 12, 2021 at 10:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Noah Bergbauer <noah@statshelix.com> writes:\n> > I am working on a project where I do not want Postgres to reuse free\n> space\n> > in old pages (see\n> >\n> https://www.postgresql.org/message-id/flat/CABjy%2BRhbFu_Hs8ZEiOzaPaJSGB9jqFF0gDU5gtwCLiurG3NLjQ%40mail.gmail.com\n> > for details). I found that the HEAP_INSERT_SKIP_FSM flag accomplishes\n> this.\n> > For a long-term solution I see two options:\n> > 1. Introduce a reloption for this.\n> > 2. Implement it as a custom table access method in an extension.\n>\n> TBH, I can't believe that this is actually a good idea. If we introduce\n> a reloption that does that, we'll just be getting users complaining about\n> table bloat ... but probably only after they get to a state where it's\n> going to be horribly painful to get out of.\n>\n> (My reaction to your previous thread was that it was simply a question\n> of blindly insisting on using BRIN indexes for a case that they're quite\n> badly adapted to. The better answer is to not use BRIN.)\n>\n> regards, tom lane\n>\n\n>\n(My reaction to your previous thread was that it was simply a question> of blindly insisting on using BRIN indexes for a case that they're quite> badly adapted to.  
The better answer is to not use BRIN.)Apologies, perhaps I am completely misunderstanding the motivation for BRIN?From the docs:>BRIN is designed for handling very \nlarge tables in which certain columns have some natural correlation with\n their physical location within the table.>[...]>a table storing a store's sale orders might have a date column on \nwhich each order was placed, and most of the time the entries for \nearlier orders will appear earlier in the tableMy table is very large, and the column in question has a strong natural correlation with each tuple's physical location. It is, in fact, a date column where entries with earlier timestamps will appear earlier in the table. To be honest, if this isn't a use case for BRIN, then I don't know what is. The only exception to this is a small proportion of tuples which are slotted into random older pages due to their small size.A btree index on the same column is 700x the size of BRIN, or 10% of relation itself. It does not perform significantly better than BRIN. The issue here is twofold: not only does slotting these tuples into older pages significantly reduce the effectiveness of BRIN, it also causes fragmentation on disk. Ultimately, this is why CLUSTER exists. One way to look at this situation is that my data is inserted exactly in index order, but Postgres keeps un-clustering it for reasons that are valid in general (don't waste disk space) but don't apply at all in this case (the file system uses compression, no space is wasted).Any alternative ideas would of course be much appreciated! 
But at the moment HEAP_INSERT_SKIP_FSM seems like the most practical solution to me.On Fri, Feb 12, 2021 at 10:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Noah Bergbauer <noah@statshelix.com> writes:\n> I am working on a project where I do not want Postgres to reuse free space\n> in old pages (see\n> https://www.postgresql.org/message-id/flat/CABjy%2BRhbFu_Hs8ZEiOzaPaJSGB9jqFF0gDU5gtwCLiurG3NLjQ%40mail.gmail.com\n> for details). I found that the HEAP_INSERT_SKIP_FSM flag accomplishes this.\n> For a long-term solution I see two options:\n> 1. Introduce a reloption for this.\n> 2. Implement it as a custom table access method in an extension.\n\nTBH, I can't believe that this is actually a good idea.  If we introduce\na reloption that does that, we'll just be getting users complaining about\ntable bloat ... but probably only after they get to a state where it's\ngoing to be horribly painful to get out of.\n\n(My reaction to your previous thread was that it was simply a question\nof blindly insisting on using BRIN indexes for a case that they're quite\nbadly adapted to.  The better answer is to not use BRIN.)\n\n                        regards, tom lane", "msg_date": "Fri, 12 Feb 2021 23:21:28 +0100", "msg_from": "Noah Bergbauer <noah@statshelix.com>", "msg_from_op": true, "msg_subject": "Re: Preventing free space from being reused" }, { "msg_contents": "On Fri, Feb 12, 2021 at 6:21 PM Noah Bergbauer <noah@statshelix.com> wrote:\n>\n> A btree index on the same column is 700x the size of BRIN, or 10% of\nrelation itself. It does not perform significantly better than BRIN. The\nissue here is twofold: not only does slotting these tuples into older pages\nsignificantly reduce the effectiveness of BRIN, it also causes\nfragmentation on disk. Ultimately, this is why CLUSTER exists. 
One way to\nlook at this situation is that my data is inserted exactly in index order,\nbut Postgres keeps un-clustering it for reasons that are valid in general\n(don't waste disk space) but don't apply at all in this case (the file\nsystem uses compression, no space is wasted).\n>\n> Any alternative ideas would of course be much appreciated! But at the\nmoment HEAP_INSERT_SKIP_FSM seems like the most practical solution to me.\n\nI would suggest to take a look at the BRIN opclass multi-minmax currently\nin development. It's designed to address that exact situation, and more\nreview would be welcome:\n\nhttps://commitfest.postgresql.org/32/2523/\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Fri, Feb 12, 2021 at 6:21 PM Noah Bergbauer <noah@statshelix.com> wrote:>> A btree index on the same column is 700x the size of BRIN, or 10% of relation itself. It does not perform significantly better than BRIN. The issue here is twofold: not only does slotting these tuples into older pages significantly reduce the effectiveness of BRIN, it also causes fragmentation on disk. Ultimately, this is why CLUSTER exists. One way to look at this situation is that my data is inserted exactly in index order, but Postgres keeps un-clustering it for reasons that are valid in general (don't waste disk space) but don't apply at all in this case (the file system uses compression, no space is wasted).>> Any alternative ideas would of course be much appreciated! But at the moment HEAP_INSERT_SKIP_FSM seems like the most practical solution to me.I would suggest to take a look at the BRIN opclass multi-minmax currently in development. 
It's designed to address that exact situation, and more review would be welcome:https://commitfest.postgresql.org/32/2523/--John NaylorEDB: http://www.enterprisedb.com", "msg_date": "Sat, 13 Feb 2021 08:36:10 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Preventing free space from being reused" }, { "msg_contents": ">I would suggest to take a look at the BRIN opclass multi-minmax currently\nin development.\n\nThank you, this does look like it could help a lot with BRIN performance in\nthis situation!\n\nBut again, if index performance alone was the only issue, then I would\nsimply accept the space overhead and switch to btree. However, the disk\nfragmentation issue still remains and is significant. It is also amplified\nin my use case due to using ZFS, mostly for compression. But it is worth\nit: I am currently observing a 13x compression ratio (when comparing disk\nspace reported by du and select sum(octet_length(x)), so this does not\ninclude the false gains from compressing padding). But in general, any\nvariable-sized append-only workload suffers from this fragmentation\nproblem. It's just that with filesystem compression, there is no longer a\ngood reason to fill up those holes and accept the fragmentation.\n\nTo be clear, the main reason why I even brought my questions to this\nmailing list is that I don't know how to (correctly) get past the check in\nheap_getnext (see my first email) when implementing the workaround as a\ncustom table access method. A reloption could theoretically disable free\nspace maps entirely for some added efficiency, but I'm inclined to agree\nthat this is not really needed.\n\n\n\nOn Sat, Feb 13, 2021 at 1:36 PM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n\n> On Fri, Feb 12, 2021 at 6:21 PM Noah Bergbauer <noah@statshelix.com>\n> wrote:\n> >\n> > A btree index on the same column is 700x the size of BRIN, or 10% of\n> relation itself. 
It does not perform significantly better than BRIN. The\n> issue here is twofold: not only does slotting these tuples into older pages\n> significantly reduce the effectiveness of BRIN, it also causes\n> fragmentation on disk. Ultimately, this is why CLUSTER exists. One way to\n> look at this situation is that my data is inserted exactly in index order,\n> but Postgres keeps un-clustering it for reasons that are valid in general\n> (don't waste disk space) but don't apply at all in this case (the file\n> system uses compression, no space is wasted).\n> >\n> > Any alternative ideas would of course be much appreciated! But at the\n> moment HEAP_INSERT_SKIP_FSM seems like the most practical solution to me.\n>\n> I would suggest to take a look at the BRIN opclass multi-minmax currently\n> in development. It's designed to address that exact situation, and more\n> review would be welcome:\n>\n> https://commitfest.postgresql.org/32/2523/\n>\n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com\n>\n\n>I would suggest to take a look at the BRIN opclass multi-minmax currently in development.Thank you, this does look like it could help a lot with BRIN performance in this situation!But again, if index performance alone was the only issue, then I would simply accept the space overhead and switch to btree. However, the disk fragmentation issue still remains and is significant. It is also amplified in my use case due to using ZFS, mostly for compression. But it is worth it: I am currently observing a 13x compression ratio (when comparing disk space reported by du and select sum(octet_length(x)), so this does not include the false gains from compressing padding). But in general, any variable-sized append-only workload suffers from this fragmentation problem. 
It's just that with filesystem compression, there is no longer a good reason to fill up those holes and accept the fragmentation.To be clear, the main reason why I even brought my questions to this mailing list is that I don't know how to (correctly) get past the check in heap_getnext (see my first email) when implementing the workaround as a custom table access method. A reloption could theoretically disable free space maps entirely for some added efficiency, but I'm inclined to agree that this is not really needed.On Sat, Feb 13, 2021 at 1:36 PM John Naylor <john.naylor@enterprisedb.com> wrote:On Fri, Feb 12, 2021 at 6:21 PM Noah Bergbauer <noah@statshelix.com> wrote:>> A btree index on the same column is 700x the size of BRIN, or 10% of relation itself. It does not perform significantly better than BRIN. The issue here is twofold: not only does slotting these tuples into older pages significantly reduce the effectiveness of BRIN, it also causes fragmentation on disk. Ultimately, this is why CLUSTER exists. One way to look at this situation is that my data is inserted exactly in index order, but Postgres keeps un-clustering it for reasons that are valid in general (don't waste disk space) but don't apply at all in this case (the file system uses compression, no space is wasted).>> Any alternative ideas would of course be much appreciated! But at the moment HEAP_INSERT_SKIP_FSM seems like the most practical solution to me.I would suggest to take a look at the BRIN opclass multi-minmax currently in development. It's designed to address that exact situation, and more review would be welcome:https://commitfest.postgresql.org/32/2523/--John NaylorEDB: http://www.enterprisedb.com", "msg_date": "Mon, 15 Feb 2021 01:42:19 +0100", "msg_from": "Noah Bergbauer <noah@statshelix.com>", "msg_from_op": true, "msg_subject": "Re: Preventing free space from being reused" } ]
[ { "msg_contents": "Hello,\n\nAs a very simple exploration of the possible gains from batching redo\nrecords during replay, I tried to avoid acquiring and releasing\nbuffers pins and locks while replaying records that touch the same\npage as the previous record. The attached experiment-grade patch\nworks by trying to give a locked buffer to the next redo handler,\nwhich then releases it if it wants a different buffer. Crash recovery\non my dev machine went from 62s to 34s (1.8x speedup) for:\n\n create table t (i int, foo text);\n insert into t select generate_series(1, 50000000), 'the quick brown\nfox jumped over the lazy dog';\n delete from t;\n\nOf course that workload was contrived to produce a suitable WAL\nhistory for this demo. The patch doesn't help more common histories\nfrom the real world, involving (non-HOT) UPDATEs and indexes etc,\nbecause then you have various kinds of interleaving that defeat this\nsimple-minded optimisation. To get a more general improvement, it\nseems that we'd need a smarter redo loop that could figure out what\ncan safely be reordered to maximise the page-level batching and\nlocality effects. I haven't studied the complications of reordering\nyet, and I'm not working on that for PostgreSQL 14, but I wanted to\nsee if others have thoughts about it. The WAL prefetching patch that\nI am planning to get into 14 opens up these possibilities by decoding\nmany records into a circular WAL decode buffer, so you can see a whole\nchain of them at once.", "msg_date": "Sat, 13 Feb 2021 10:41:16 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Experimenting with redo batching" } ]
[ { "msg_contents": "Greetings!\n\nI would like to know if there is a better way to pass a relation or if the\nrelation name (CString) as a parameter in a C function and thus be able to\nmanipulate its tuples. The documentation is available here:\nhttps://www.postgresql.org/docs/13/xfunc-c.html#id-1.8.3.13.11. But it is\nnot quite clear enough on how to retrieve tuples. The handling of these is\nquite clear. The only function I'm currently using (but not working) is the\nTupleDesc TypeGetTupleDesc (Oid typeoid, List * colaliases) function. Do we\nhave a function like TupleDesc RelationNameGetTupleDesc (const char *\nrelname) (Old and deprecated)?\n\nregards,\n\n\n*Andjasubu Bungama, Patrick *
", "msg_date": "Sat, 13 Feb 2021 14:54:39 -0500", "msg_from": "Patrick Handja <patrick.bungama@gmail.com>", "msg_from_op": true, "msg_subject": "How to get Relation tuples in C function" }, { "msg_contents": "Patrick Handja <patrick.bungama@gmail.com> writes:\n> I would like to know if there is a better way to pass a relation or if the\n> relation name (CString) as a parameter in a C function and thus be able to\n> manipulate its tuples. The documentation is available here:\n> https://www.postgresql.org/docs/13/xfunc-c.html#id-1.8.3.13.11. 
But it is\n> not quite clear enough on how to retrieve tuples.\n\nThe thing I'd recommend you do is use SPI [1], which lets you execute\nSQL queries from inside a C function. If you don't want to do that\nfor whatever reason, you need to open the relation, set up a scan,\nand fetch tuples from the scan, relying on low-level APIs that tend\nto change from version to version. contrib/pageinspect or\ncontrib/pgstattuple might offer usable sample code, although with any\nprototype you might look at, it's going to be hard to see the forest\nfor the trees.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/docs/current/spi.html\n\n\n", "msg_date": "Sat, 13 Feb 2021 16:23:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: How to get Relation tuples in C function" }, { "msg_contents": "On Sun, Feb 14, 2021 at 5:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Patrick Handja <patrick.bungama@gmail.com> writes:\n> > I would like to know if there is a better way to pass a relation or if\n> the\n> > relation name (CString) as a parameter in a C function and thus be able\n> to\n> > manipulate its tuples. The documentation is available here:\n> > https://www.postgresql.org/docs/13/xfunc-c.html#id-1.8.3.13.11. But it\n> is\n> > not quite clear enough on how to retrieve tuples.\n>\n> The thing I'd recommend you do is use SPI [1], which lets you execute\n> SQL queries from inside a C function. If you don't want to do that\n> for whatever reason, you need to open the relation, set up a scan,\n> and fetch tuples from the scan, relying on low-level APIs that tend\n> to change from version to version. contrib/pageinspect or\n> contrib/pgstattuple might offer usable sample code, although with any\n> prototype you might look at, it's going to be hard to see the forest\n> for the trees.\n>\n> regards, tom lane\n>\n> [1] https://www.postgresql.org/docs/current/spi.html\n>\n>\n> Thank you tom for the reply. 
What would be the difference between the\nSPI and \"write a pure SQL UDF\" and call it with DirectFunctionCall1? I\njust ran into a similar situation some days before. Currently I think\nDirectFunctionCall1 doesn't need to maintain a connection but SPI has to\ndo that.\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Sun, 14 Feb 2021 09:29:08 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How to get Relation tuples in C function" }, { "msg_contents": "On Sun, Feb 14, 2021 at 09:29:08AM +0800, Andy Fan wrote:\n> Thank you tom for the reply. What would be the difference between the\n> SPI and \"write a pure SQL UDF\" and call it with DirectFunctionCall1? I\n> just ran into a similar situation some days before. Currently I think\n> DirectFunctionCall1 doesn't need to maintain a connection but SPI has to\n> do that.\n\nHard to say without knowing your use case. A PL function is more\nsimple to maintain than a C function, though usually less performant\nfrom the pure point of view of its operations. A SQL function could\nfinish by being inlined, allowing the planner to apply optimizations\nas it would know the function body. Going with SPI has the advantage\nto have code able to work without any changes across major versions,\nwhich is a no-brainer when it comes to long-term maintenance.\n--\nMichael", "msg_date": "Sun, 14 Feb 2021 20:56:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: How to get Relation tuples in C function" }, { "msg_contents": "On Sun, Feb 14, 2021 at 7:56 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Sun, Feb 14, 2021 at 09:29:08AM +0800, Andy Fan wrote:\n> > Thank you tom for the reply. What would be the difference between the\n> > SPI and \"write a pure SQL UDF\" and call it with DirectFunctionCall1? I\n> > just ran into a similar situation some days before. Currently I think\n> > DirectFunctionCall1 doesn't need to maintain a connection but SPI has to\n> > do that.\n>\n> Hard to say without knowing your use case. A PL function is more\n> simple to maintain than a C function, though usually less performant\n> from the pure point of view of its operations. A SQL function could\n> finish by being inlined, allowing the planner to apply optimizations\n> as it would know the function body. Going with SPI has the advantage\n> to have code able to work without any changes across major versions,\n> which is a no-brainer when it comes to long-term maintenance.\n> --\n> Michael\n>\n\nThank you Michael for the response.\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Wed, 17 Feb 2021 09:07:50 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How to get Relation tuples in C function" } ]
[ { "msg_contents": "Hi,\n\nguc.c: In function ‘RestoreGUCState’:\nguc.c:9455:4: error: ‘varsourceline’ may be used uninitialized in this\nfunction [-Werror=maybe-uninitialized]\n 9455 | set_config_sourcefile(varname, varsourcefile, varsourceline);\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nI propose the attached.", "msg_date": "Mon, 15 Feb 2021 14:15:51 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "GCC warning in back branches" }, { "msg_contents": "On Mon, Feb 15, 2021 at 02:15:51PM +1300, Thomas Munro wrote:\n> guc.c: In function ‘RestoreGUCState’:\n> guc.c:9455:4: error: ‘varsourceline’ may be used uninitialized in this\n> function [-Werror=maybe-uninitialized]\n> 9455 | set_config_sourcefile(varname, varsourcefile, varsourceline);\n> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> \n> I propose the attached.\n\nWe usually don't bother much about compilation warnings in stable\nbranches as long as they are not real bugs, and these are the oldest\nstable ones. So why here? I would have patched the top of the\nfunction if it were me, btw.\n--\nMichael", "msg_date": "Mon, 15 Feb 2021 10:34:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GCC warning in back branches" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Feb 15, 2021 at 02:15:51PM +1300, Thomas Munro wrote:\n>> I propose the attached.\n\n> We usually don't bother much about compilation warnings in stable\n> branches as long as they are not real bugs, and these are the oldest\n> stable ones. So why here? 
I would have patched the top of the\n> function if it were me, btw.\n\nIf somebody were running a buildfarm member with recent gcc\nand -Werror, we'd pretty much have to fix it.\n\nI'd say the real policy is that we don't worry about\nuninitialized-variable warnings from old compiler versions,\non the theory that they're probably compiler shortcomings.\nBut I'd be inclined to fix anything from a current gcc version.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 14 Feb 2021 20:41:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GCC warning in back branches" }, { "msg_contents": "On Mon, Feb 15, 2021 at 2:35 PM Michael Paquier <michael@paquier.xyz> wrote:\n> ... I would have patched the top of the\n> function if it were me, btw.\n\nI just copied the way it is coded in master (due to commit fbb2e9a0\nwhich fixed this warning in 11+).\n\n\n", "msg_date": "Mon, 15 Feb 2021 17:59:52 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GCC warning in back branches" } ]
[ { "msg_contents": "The call to heap_page_prune() within lazy_scan_heap() passes a bool\nliteral ('false') as its fourth argument. But the fourth argument is\nof type TransactionId, not bool. This has been the case since the\nsnapshot scalability work performed by commit dc7420c2c92. Surely\nsomething is amiss here.\n\nI also notice some inconsistencies in the heap_page_prune() prototype\nnames vs the corresponding definition names. Might be worth doing\nsomething about in passing.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 14 Feb 2021 18:42:18 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Snapshot scalability patch issue" }, { "msg_contents": "Hi,\n\nOn 2021-02-14 18:42:18 -0800, Peter Geoghegan wrote:\n> The call to heap_page_prune() within lazy_scan_heap() passes a bool\n> literal ('false') as its fourth argument. But the fourth argument is\n> of type TransactionId, not bool. This has been the case since the\n> snapshot scalability work performed by commit dc7420c2c92. Surely\n> something is amiss here.\n\nLooks like I accidentally swapped the InvalidTransactionId and false\naround - luckily they have the same actual bit pattern...\n\nI do wish C could pass arguments by name.\n\nI'll push something once I'm back at my computer...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 15 Feb 2021 15:08:40 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Snapshot scalability patch issue" }, { "msg_contents": "Hi,\n\nOn 2021-02-15 15:08:40 -0800, Andres Freund wrote:\n> On 2021-02-14 18:42:18 -0800, Peter Geoghegan wrote:\n> > The call to heap_page_prune() within lazy_scan_heap() passes a bool\n> > literal ('false') as its fourth argument. But the fourth argument is\n> > of type TransactionId, not bool. This has been the case since the\n> > snapshot scalability work performed by commit dc7420c2c92. 
Surely\n> > something is amiss here.\n> \n> Looks like I accidentally swapped the InvalidTransactionId and false\n> around - luckily they have the same actual bit pattern...\n> \n> I do wish C could pass arguments by name.\n> \n> I'll push something once I'm back at my computer...\n\nDone. Thanks for noticing/reporting!\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 15 Feb 2021 17:30:12 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Snapshot scalability patch issue" }, { "msg_contents": "On Mon, Feb 15, 2021 at 5:30 PM Andres Freund <andres@anarazel.de> wrote:\n> Done. Thanks for noticing/reporting!\n\nGreat, thanks!\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 15 Feb 2021 17:30:45 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Snapshot scalability patch issue" } ]
[ { "msg_contents": "hi,\n\nIt turns out parallel_workers may be a useful reloption for certain uses of partitioned tables, at least if they're made up of fancy column store partitions (see https://www.postgresql.org/message-id/7d6fdc20-857c-4cbe-ae2e-c0ff9520ed55%40www.fastmail.com).\n\nWould somebody tell me what I'm doing wrong? I would love to submit a patch but I'm stuck:\n\ndiff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c\nindex cd3fdd259c..f1ade035ac 100644\n--- a/src/backend/optimizer/path/allpaths.c\n+++ b/src/backend/optimizer/path/allpaths.c\n@@ -3751,6 +3751,7 @@ compute_parallel_worker(RelOptInfo *rel, double heap_pages, double index_pages,\n \t * If the user has set the parallel_workers reloption, use that; otherwise\n \t * select a default number of workers.\n \t */\n+\t// I want to affect this\n \tif (rel->rel_parallel_workers != -1)\n \t\tparallel_workers = rel->rel_parallel_workers;\n \telse\n\nso I do this\n\ndiff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c\nindex c687d3ee9e..597b209bfb 100644\n--- a/src/backend/access/common/reloptions.c\n+++ b/src/backend/access/common/reloptions.c\n@@ -1961,13 +1961,15 @@ build_local_reloptions(local_relopts *relopts, Datum options, bool validate)\n bytea *\n partitioned_table_reloptions(Datum reloptions, bool validate)\n {\n-\t/*\n-\t * There are no options for partitioned tables yet, but this is able to do\n-\t * some validation.\n-\t */\n+\tstatic const relopt_parse_elt tab[] = {\n+\t\t{\"parallel_workers\", RELOPT_TYPE_INT,\n+\t\toffsetof(StdRdOptions, parallel_workers)},\n+\t};\n+\n \treturn (bytea *) build_reloptions(reloptions, validate,\n \t\t\t\t\t\t\t\t\t RELOPT_KIND_PARTITIONED,\n-\t\t\t\t\t\t\t\t\t 0, NULL, 0);\n+\t\t\t\t\t\t\t\t\t sizeof(StdRdOptions),\n+\t\t\t\t\t\t\t\t\t tab, lengthof(tab));\n }\n\nThat \"works\":\n\npostgres=# alter table test_3pd_cstore_partitioned set (parallel_workers = 33);\nALTER 
TABLE\npostgres=# select relname, relkind, reloptions from pg_class where relname = 'test_3pd_cstore_partitioned';\n relname | relkind | reloptions \n-----------------------------+---------+-----------------------\n test_3pd_cstore_partitioned | p | {parallel_workers=33}\n(1 row)\n\nBut it seems to be ignored:\n\ndiff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c\nindex cd3fdd259c..c68835ce38 100644\n--- a/src/backend/optimizer/path/allpaths.c\n+++ b/src/backend/optimizer/path/allpaths.c\n@@ -3751,6 +3751,8 @@ compute_parallel_worker(RelOptInfo *rel, double heap_pages, double index_pages,\n \t * If the user has set the parallel_workers reloption, use that; otherwise\n \t * select a default number of workers.\n \t */\n+\t// I want to affect this, but this assertion always passes\n+\tAssert(rel->rel_parallel_workers == -1)\n \tif (rel->rel_parallel_workers != -1)\n \t\tparallel_workers = rel->rel_parallel_workers;\n \telse\n\nThanks and please forgive my code pasting etiquette as this is my first post to pgsql-hackers and I'm not quite sure what the right format is.\n\nThank you,\nSeamus\n\n\n", "msg_date": "Sun, 14 Feb 2021 22:15:04 -0500", "msg_from": "\"Seamus Abshere\" <seamus@abshere.net>", "msg_from_op": true, "msg_subject": "A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "Hi Seamus,\n\nOn Mon, Feb 15, 2021 at 5:28 PM Seamus Abshere <seamus@abshere.net> wrote:\n> It turns out parallel_workers may be a useful reloption for certain uses of partitioned tables, at least if they're made up of fancy column store partitions (see https://www.postgresql.org/message-id/7d6fdc20-857c-4cbe-ae2e-c0ff9520ed55%40www.fastmail.com).\n>\n> Would somebody tell me what I'm doing wrong? 
I would love to submit a patch but I'm stuck:\n>\n> diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c\n> index cd3fdd259c..f1ade035ac 100644\n> --- a/src/backend/optimizer/path/allpaths.c\n> +++ b/src/backend/optimizer/path/allpaths.c\n> @@ -3751,6 +3751,7 @@ compute_parallel_worker(RelOptInfo *rel, double heap_pages, double index_pages,\n> * If the user has set the parallel_workers reloption, use that; otherwise\n> * select a default number of workers.\n> */\n> + // I want to affect this\n> if (rel->rel_parallel_workers != -1)\n> parallel_workers = rel->rel_parallel_workers;\n> else\n>\n> so I do this\n>\n> diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c\n> index c687d3ee9e..597b209bfb 100644\n> --- a/src/backend/access/common/reloptions.c\n> +++ b/src/backend/access/common/reloptions.c\n> @@ -1961,13 +1961,15 @@ build_local_reloptions(local_relopts *relopts, Datum options, bool validate)\n> bytea *\n> partitioned_table_reloptions(Datum reloptions, bool validate)\n> {\n> - /*\n> - * There are no options for partitioned tables yet, but this is able to do\n> - * some validation.\n> - */\n> + static const relopt_parse_elt tab[] = {\n> + {\"parallel_workers\", RELOPT_TYPE_INT,\n> + offsetof(StdRdOptions, parallel_workers)},\n> + };\n> +\n> return (bytea *) build_reloptions(reloptions, validate,\n> RELOPT_KIND_PARTITIONED,\n> - 0, NULL, 0);\n> + sizeof(StdRdOptions),\n> + tab, lengthof(tab));\n> }\n>\n> That \"works\":\n>\n> postgres=# alter table test_3pd_cstore_partitioned set (parallel_workers = 33);\n> ALTER TABLE\n> postgres=# select relname, relkind, reloptions from pg_class where relname = 'test_3pd_cstore_partitioned';\n> relname | relkind | reloptions\n> -----------------------------+---------+-----------------------\n> test_3pd_cstore_partitioned | p | {parallel_workers=33}\n> (1 row)\n>\n> But it seems to be ignored:\n>\n> diff --git a/src/backend/optimizer/path/allpaths.c 
b/src/backend/optimizer/path/allpaths.c\n> index cd3fdd259c..c68835ce38 100644\n> --- a/src/backend/optimizer/path/allpaths.c\n> +++ b/src/backend/optimizer/path/allpaths.c\n> @@ -3751,6 +3751,8 @@ compute_parallel_worker(RelOptInfo *rel, double heap_pages, double index_pages,\n> * If the user has set the parallel_workers reloption, use that; otherwise\n> * select a default number of workers.\n> */\n> + // I want to affect this, but this assertion always passes\n> + Assert(rel->rel_parallel_workers == -1)\n> if (rel->rel_parallel_workers != -1)\n> parallel_workers = rel->rel_parallel_workers;\n> else\n\nYou may see by inspecting the callers of compute_parallel_worker()\nthat it never gets called on a partitioned table, only its leaf\npartitions. Maybe you could try calling compute_parallel_worker()\nsomewhere in add_paths_to_append_rel(), which has this code to figure\nout parallel_workers to use for a parallel Append path for a given\npartitioned table:\n\n /* Find the highest number of workers requested for any subpath. */\n foreach(lc, partial_subpaths)\n {\n Path *path = lfirst(lc);\n\n parallel_workers = Max(parallel_workers, path->parallel_workers);\n }\n Assert(parallel_workers > 0);\n\n /*\n * If the use of parallel append is permitted, always request at least\n * log2(# of children) workers. We assume it can be useful to have\n * extra workers in this case because they will be spread out across\n * the children. The precise formula is just a guess, but we don't\n * want to end up with a radically different answer for a table with N\n * partitions vs. an unpartitioned table with the same data, so the\n * use of some kind of log-scaling here seems to make some sense.\n */\n if (enable_parallel_append)\n {\n parallel_workers = Max(parallel_workers,\n fls(list_length(live_childrels)));\n parallel_workers = Min(parallel_workers,\n max_parallel_workers_per_gather);\n }\n Assert(parallel_workers > 0);\n\n /* Generate a partial append path. 
*/\n appendpath = create_append_path(root, rel, NIL, partial_subpaths,\n NIL, NULL, parallel_workers,\n enable_parallel_append,\n -1);\n\nNote that the 'rel' in this code refers to the partitioned table for\nwhich an Append path is being considered, so compute_parallel_worker()\nusing that 'rel' would use the partitioned table's\nrel_parallel_workers as you are trying to do.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Feb 2021 17:53:33 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "hi,\n\nHere we go, my first patch... solves https://www.postgresql.org/message-id/7d6fdc20-857c-4cbe-ae2e-c0ff9520ed55@www.fastmail.com\n\nBest,\nSeamus\n\nOn Mon, Feb 15, 2021, at 3:53 AM, Amit Langote wrote:\n> Hi Seamus,\n> \n> On Mon, Feb 15, 2021 at 5:28 PM Seamus Abshere <seamus@abshere.net> wrote:\n> > It turns out parallel_workers may be a useful reloption for certain uses of partitioned tables, at least if they're made up of fancy column store partitions (see https://www.postgresql.org/message-id/7d6fdc20-857c-4cbe-ae2e-c0ff9520ed55%40www.fastmail.com).\n> >\n> > Would somebody tell me what I'm doing wrong? 
I would love to submit a patch but I'm stuck:\n> >\n> > diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c\n> > index cd3fdd259c..f1ade035ac 100644\n> > --- a/src/backend/optimizer/path/allpaths.c\n> > +++ b/src/backend/optimizer/path/allpaths.c\n> > @@ -3751,6 +3751,7 @@ compute_parallel_worker(RelOptInfo *rel, double heap_pages, double index_pages,\n> > * If the user has set the parallel_workers reloption, use that; otherwise\n> > * select a default number of workers.\n> > */\n> > + // I want to affect this\n> > if (rel->rel_parallel_workers != -1)\n> > parallel_workers = rel->rel_parallel_workers;\n> > else\n> >\n> > so I do this\n> >\n> > diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c\n> > index c687d3ee9e..597b209bfb 100644\n> > --- a/src/backend/access/common/reloptions.c\n> > +++ b/src/backend/access/common/reloptions.c\n> > @@ -1961,13 +1961,15 @@ build_local_reloptions(local_relopts *relopts, Datum options, bool validate)\n> > bytea *\n> > partitioned_table_reloptions(Datum reloptions, bool validate)\n> > {\n> > - /*\n> > - * There are no options for partitioned tables yet, but this is able to do\n> > - * some validation.\n> > - */\n> > + static const relopt_parse_elt tab[] = {\n> > + {\"parallel_workers\", RELOPT_TYPE_INT,\n> > + offsetof(StdRdOptions, parallel_workers)},\n> > + };\n> > +\n> > return (bytea *) build_reloptions(reloptions, validate,\n> > RELOPT_KIND_PARTITIONED,\n> > - 0, NULL, 0);\n> > + sizeof(StdRdOptions),\n> > + tab, lengthof(tab));\n> > }\n> >\n> > That \"works\":\n> >\n> > postgres=# alter table test_3pd_cstore_partitioned set (parallel_workers = 33);\n> > ALTER TABLE\n> > postgres=# select relname, relkind, reloptions from pg_class where relname = 'test_3pd_cstore_partitioned';\n> > relname | relkind | reloptions\n> > -----------------------------+---------+-----------------------\n> > test_3pd_cstore_partitioned | p | {parallel_workers=33}\n> > 
(1 row)\n> >\n> > But it seems to be ignored:\n> >\n> > diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c\n> > index cd3fdd259c..c68835ce38 100644\n> > --- a/src/backend/optimizer/path/allpaths.c\n> > +++ b/src/backend/optimizer/path/allpaths.c\n> > @@ -3751,6 +3751,8 @@ compute_parallel_worker(RelOptInfo *rel, double heap_pages, double index_pages,\n> > * If the user has set the parallel_workers reloption, use that; otherwise\n> > * select a default number of workers.\n> > */\n> > + // I want to affect this, but this assertion always passes\n> > + Assert(rel->rel_parallel_workers == -1)\n> > if (rel->rel_parallel_workers != -1)\n> > parallel_workers = rel->rel_parallel_workers;\n> > else\n> \n> You may see by inspecting the callers of compute_parallel_worker()\n> that it never gets called on a partitioned table, only its leaf\n> partitions. Maybe you could try calling compute_parallel_worker()\n> somewhere in add_paths_to_append_rel(), which has this code to figure\n> out parallel_workers to use for a parallel Append path for a given\n> partitioned table:\n> \n> /* Find the highest number of workers requested for any subpath. */\n> foreach(lc, partial_subpaths)\n> {\n> Path *path = lfirst(lc);\n> \n> parallel_workers = Max(parallel_workers, path->parallel_workers);\n> }\n> Assert(parallel_workers > 0);\n> \n> /*\n> * If the use of parallel append is permitted, always request at least\n> * log2(# of children) workers. We assume it can be useful to have\n> * extra workers in this case because they will be spread out across\n> * the children. The precise formula is just a guess, but we don't\n> * want to end up with a radically different answer for a table with N\n> * partitions vs. 
an unpartitioned table with the same data, so the\n> * use of some kind of log-scaling here seems to make some sense.\n> */\n> if (enable_parallel_append)\n> {\n> parallel_workers = Max(parallel_workers,\n> fls(list_length(live_childrels)));\n> parallel_workers = Min(parallel_workers,\n> max_parallel_workers_per_gather);\n> }\n> Assert(parallel_workers > 0);\n> \n> /* Generate a partial append path. */\n> appendpath = create_append_path(root, rel, NIL, partial_subpaths,\n> NIL, NULL, parallel_workers,\n> enable_parallel_append,\n> -1);\n> \n> Note that the 'rel' in this code refers to the partitioned table for\n> which an Append path is being considered, so compute_parallel_worker()\n> using that 'rel' would use the partitioned table's\n> rel_parallel_workers as you are trying to do.\n> \n> -- \n> Amit Langote\n> EDB: http://www.enterprisedb.com\n>", "msg_date": "Mon, 15 Feb 2021 10:42:14 -0500", "msg_from": "\"Seamus Abshere\" <seamus@abshere.net>", "msg_from_op": true, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "On Mon, 2021-02-15 at 17:53 +0900, Amit Langote wrote:\n> On Mon, Feb 15, 2021 at 5:28 PM Seamus Abshere <seamus@abshere.net> wrote:\n> > It turns out parallel_workers may be a useful reloption for certain uses of partitioned tables,\n> > at least if they're made up of fancy column store partitions (see\n> > https://www.postgresql.org/message-id/7d6fdc20-857c-4cbe-ae2e-c0ff9520ed55%40www.fastmail.com).\n> > Would somebody tell me what I'm doing wrong? I would love to submit a patch but I'm stuck:\n> \n> You may see by inspecting the callers of compute_parallel_worker()\n> that it never gets called on a partitioned table, only its leaf\n> partitions. 
Maybe you could try calling compute_parallel_worker()\n> somewhere in add_paths_to_append_rel(), which has this code to figure\n> out parallel_workers to use for a parallel Append path for a given\n> partitioned table:\n> \n> /* Find the highest number of workers requested for any subpath. */\n> foreach(lc, partial_subpaths)\n> {\n> Path *path = lfirst(lc);\n> \n> parallel_workers = Max(parallel_workers, path->parallel_workers);\n> }\n> Assert(parallel_workers > 0);\n> \n> /*\n> * If the use of parallel append is permitted, always request at least\n> * log2(# of children) workers. We assume it can be useful to have\n> * extra workers in this case because they will be spread out across\n> * the children. The precise formula is just a guess, but we don't\n> * want to end up with a radically different answer for a table with N\n> * partitions vs. an unpartitioned table with the same data, so the\n> * use of some kind of log-scaling here seems to make some sense.\n> */\n> if (enable_parallel_append)\n> {\n> parallel_workers = Max(parallel_workers,\n> fls(list_length(live_childrels)));\n> parallel_workers = Min(parallel_workers,\n> max_parallel_workers_per_gather);\n> }\n> Assert(parallel_workers > 0);\n> \n> Note that the 'rel' in this code refers to the partitioned table for\n> which an Append path is being considered, so compute_parallel_worker()\n> using that 'rel' would use the partitioned table's\n> rel_parallel_workers as you are trying to do.\n\nNote that there is a second chunk of code quite like that one a few\nlines down from there that would also have to be modified.\n\nI am +1 on allowing to override the degree of parallelism on a parallel\nappend. If \"parallel_workers\" on the partitioned table is an option for\nthat, it might be a simple solution. 
On the other hand, perhaps it would\nbe less confusing to have a different storage parameter name rather than\nhaving \"parallel_workers\" do double duty.\n\nAlso, since there is a design rule that storage parameters can only be used\non partitions, we would have to change that - is that a problem for anybody?\n\n\nThere is another related consideration that doesn't need to be addressed\nby this patch, but that is somewhat related: if the executor prunes some\npartitions, the degree of parallelism is unaffected, right?\nSo if the planner decides to use 24 workers for 25 partitions, and the\nexecutor discards all but one of these partition scans, we would end up\nwith 24 workers scanning a single partition.\n\nI am not sure how that could be improved.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Mon, 15 Feb 2021 17:06:52 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "Hi Seamus,\n\nPlease see my reply below.\n\nOn Tue, Feb 16, 2021 at 1:35 AM Seamus Abshere <seamus@abshere.net> wrote:\n> On Mon, Feb 15, 2021, at 3:53 AM, Amit Langote wrote:\n> > On Mon, Feb 15, 2021 at 5:28 PM Seamus Abshere <seamus@abshere.net> wrote:\n> > > It turns out parallel_workers may be a useful reloption for certain uses of partitioned tables, at least if they're made up of fancy column store partitions (see https://www.postgresql.org/message-id/7d6fdc20-857c-4cbe-ae2e-c0ff9520ed55%40www.fastmail.com).\n> > >\n> > > Would somebody tell me what I'm doing wrong? I would love to submit a patch but I'm stuck:\n>\n> > You may see by inspecting the callers of compute_parallel_worker()\n> > that it never gets called on a partitioned table, only its leaf\n> > partitions. 
Maybe you could try calling compute_parallel_worker()\n> > somewhere in add_paths_to_append_rel(), which has this code to figure\n> > out parallel_workers to use for a parallel Append path for a given\n> > partitioned table:\n> >\n> > /* Find the highest number of workers requested for any subpath. */\n> > foreach(lc, partial_subpaths)\n> > {\n> > Path *path = lfirst(lc);\n> >\n> > parallel_workers = Max(parallel_workers, path->parallel_workers);\n> > }\n> > Assert(parallel_workers > 0);\n> >\n> > /*\n> > * If the use of parallel append is permitted, always request at least\n> > * log2(# of children) workers. We assume it can be useful to have\n> > * extra workers in this case because they will be spread out across\n> > * the children. The precise formula is just a guess, but we don't\n> > * want to end up with a radically different answer for a table with N\n> > * partitions vs. an unpartitioned table with the same data, so the\n> > * use of some kind of log-scaling here seems to make some sense.\n> > */\n> > if (enable_parallel_append)\n> > {\n> > parallel_workers = Max(parallel_workers,\n> > fls(list_length(live_childrels)));\n> > parallel_workers = Min(parallel_workers,\n> > max_parallel_workers_per_gather);\n> > }\n> > Assert(parallel_workers > 0);\n> >\n> > /* Generate a partial append path. */\n> > appendpath = create_append_path(root, rel, NIL, partial_subpaths,\n> > NIL, NULL, parallel_workers,\n> > enable_parallel_append,\n> > -1);\n> >\n> > Note that the 'rel' in this code refers to the partitioned table for\n> > which an Append path is being considered, so compute_parallel_worker()\n> > using that 'rel' would use the partitioned table's\n> > rel_parallel_workers as you are trying to do.\n>\n> Here we go, my first patch... 
solves https://www.postgresql.org/message-id/7d6fdc20-857c-4cbe-ae2e-c0ff9520ed55@www.fastmail.com\n\nThanks for sending the patch here.\n\nIt seems you haven't made enough changes for reloptions code to\nrecognize parallel_workers as valid for partitioned tables, because\neven with the patch applied, I get this:\n\ncreate table rp (a int) partition by range (a);\ncreate table rp1 partition of rp for values from (minvalue) to (0);\ncreate table rp2 partition of rp for values from (0) to (maxvalue);\nalter table rp set (parallel_workers = 1);\nERROR: unrecognized parameter \"parallel_workers\"\n\nYou need this:\n\ndiff --git a/src/backend/access/common/reloptions.c\nb/src/backend/access/common/reloptions.c\nindex 029a73325e..9eb8a0c10d 100644\n--- a/src/backend/access/common/reloptions.c\n+++ b/src/backend/access/common/reloptions.c\n@@ -377,7 +377,7 @@ static relopt_int intRelOpts[] =\n {\n \"parallel_workers\",\n \"Number of parallel processes that can be used per\nexecutor node for this relation.\",\n- RELOPT_KIND_HEAP,\n+ RELOPT_KIND_HEAP | RELOPT_KIND_PARTITIONED,\n ShareUpdateExclusiveLock\n },\n -1, 0, 1024\n\nwhich tells reloptions parsing code that parallel_workers is\nacceptable for both heap and partitioned relations.\n\nOther comments on the patch:\n\n* This function calling style, where the first argument is not placed\non the same line as the function itself, is not very common in\nPostgres:\n\n+ /* First see if there is a root-level setting for parallel_workers */\n+ parallel_workers = compute_parallel_worker(\n+ rel,\n+ -1,\n+ -1,\n+ max_parallel_workers_per_gather\n+\n\nThis makes the new code look very different from the rest of the\ncodebase. Better to stick to existing styles.\n\n2. It might be a good idea to use this opportunity to add a function,\nsay compute_append_parallel_workers(), for the code that does what the\nfunction name says. Then the patch will simply add the new\ncompute_parallel_worker() call at the top of that function.\n\n3. 
I think we should consider the Append parent relation's\nparallel_workers ONLY if it is a partitioned relation, because it\ndoesn't make a lot of sense for other types of parent relations. So\nthe new code should look like this:\n\nif (IS_PARTITIONED_REL(rel))\n parallel_workers = compute_parallel_worker(rel, -1, -1,\nmax_parallel_workers_per_gather);\n\n4. Maybe it also doesn't make sense to consider the parent relation's\nparallel_workers if Parallel Append is disabled\n(enable_parallel_append = off). That's because with a simple\n(non-parallel) Append running under Gather, all launched parallel\nworkers process the same partition before moving to the next one.\nOTOH, one's intention of setting parallel_workers on the parent\npartitioned table would most likely be to distribute workers across\npartitions, which is only possible with parallel Append\n(enable_parallel_append = on). So, the new code should look like\nthis:\n\nif (IS_PARTITIONED_REL(rel) && enable_parallel_append)\n parallel_workers = compute_parallel_worker(rel, -1, -1,\nmax_parallel_workers_per_gather);\n\nBTW, please consider bottom-posting like I've done in this reply,\nbecause that makes it easier to follow discussions involving patch\nreviews that can potentially take many emails to reach conclusions.\n\n\n\n\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 16 Feb 2021 15:05:50 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "Hi,\n\nOn Tue, Feb 16, 2021 at 1:06 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> On Mon, 2021-02-15 at 17:53 +0900, Amit Langote wrote:\n> > On Mon, Feb 15, 2021 at 5:28 PM Seamus Abshere <seamus@abshere.net> wrote:\n> > > It turns out parallel_workers may be a useful reloption for certain uses of partitioned tables,\n> > > at least if they're made up of fancy column store partitions (see\n> > > 
https://www.postgresql.org/message-id/7d6fdc20-857c-4cbe-ae2e-c0ff9520ed55%40www.fastmail.com).\n> > > Would somebody tell me what I'm doing wrong? I would love to submit a patch but I'm stuck:\n> >\n> > You may see by inspecting the callers of compute_parallel_worker()\n> > that it never gets called on a partitioned table, only its leaf\n> > partitions. Maybe you could try calling compute_parallel_worker()\n> > somewhere in add_paths_to_append_rel(), which has this code to figure\n> > out parallel_workers to use for a parallel Append path for a given\n> > partitioned table:\n> >\n> > /* Find the highest number of workers requested for any subpath. */\n> > foreach(lc, partial_subpaths)\n> > {\n> > Path *path = lfirst(lc);\n> >\n> > parallel_workers = Max(parallel_workers, path->parallel_workers);\n> > }\n> > Assert(parallel_workers > 0);\n> >\n> > /*\n> > * If the use of parallel append is permitted, always request at least\n> > * log2(# of children) workers. We assume it can be useful to have\n> > * extra workers in this case because they will be spread out across\n> > * the children. The precise formula is just a guess, but we don't\n> > * want to end up with a radically different answer for a table with N\n> > * partitions vs. 
an unpartitioned table with the same data, so the\n> > * use of some kind of log-scaling here seems to make some sense.\n> > */\n> > if (enable_parallel_append)\n> > {\n> > parallel_workers = Max(parallel_workers,\n> > fls(list_length(live_childrels)));\n> > parallel_workers = Min(parallel_workers,\n> > max_parallel_workers_per_gather);\n> > }\n> > Assert(parallel_workers > 0);\n> >\n> > Note that the 'rel' in this code refers to the partitioned table for\n> > which an Append path is being considered, so compute_parallel_worker()\n> > using that 'rel' would use the partitioned table's\n> > rel_parallel_workers as you are trying to do.\n>\n> Note that there is a second chunk of code quite like that one a few\n> lines down from there that would also have to be modified.\n>\n> I am +1 on allowing to override the degree of parallelism on a parallel\n> append. If \"parallel_workers\" on the partitioned table is an option for\n> that, it might be a simple solution. On the other hand, perhaps it would\n> be less confusing to have a different storage parameter name rather than\n> having \"parallel_workers\" do double duty.\n>\n> Also, since there is a design rule that storage parameters can only be used\n> on partitions, we would have to change that - is that a problem for anybody?\n\nI am not aware of a rule that suggests that parallel_workers is always\ninterpreted using storage-level considerations. If that is indeed a\npopular interpretation at this point, then yes, we should be open to\nconsidering a new name for the parameter that this patch wants to add.\n\nMaybe parallel_append_workers? 
Perhaps not a bad idea in this patch's\ncase, but considering that we may want to expand the support of\ncross-partition parallelism to operations other than querying, maybe\nsomething else?\n\nThis reminds me of something I forgot to mention in my review of the\npatch -- it should also update the documentation of parallel_workers\non the CREATE TABLE page to mention that it will be interpreted a bit\ndifferently for partitioned tables than for regular storage-bearing\nrelations. Workers specified for partitioned tables would be\ndistributed by the executor over its partitions, unlike with\nstorage-bearing relations, where the distribution of specified workers\nis controlled by the AM using storage-level considerations.\n\n> There is another related consideration that doesn't need to be addressed\n> by this patch, but that is somewhat related: if the executor prunes some\n> partitions, the degree of parallelism is unaffected, right?\n\nThat's correct. Launched workers could be less than planned, but that\nwould not have anything to do with executor pruning.\n\n> So if the planner decides to use 24 workers for 25 partitions, and the\n> executor discards all but one of these partition scans, we would end up\n> with 24 workers scanning a single partition.\n\nI do remember pondering this when testing my patches to improve the\nperformance of executing a generic plan to scan a partitioned table\nwhere runtime pruning is possible. 
Here is an example:\n\ncreate table hp (a int) partition by hash (a);\nselect 'create table hp' || i || ' partition of hp for values with\n(modulus 100, remainder ' || i || ');' from generate_series(0, 99) i;\n\\gexec\ninsert into hp select generate_series(0, 1000000);\nalter table hp set (parallel_workers = 16);\nset plan_cache_mode to force_generic_plan ;\nset max_parallel_workers_per_gather to 16;\nprepare q as select * from hp where a = $1;\n\nexplain (analyze, verbose) execute q (1);\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Gather (cost=1000.00..14426.50 rows=5675 width=4) (actual\ntime=2.370..25.002 rows=1 loops=1)\n Output: hp.a\n Workers Planned: 16\n Workers Launched: 7\n -> Parallel Append (cost=0.00..12859.00 rows=400 width=4) (actual\ntime=0.006..0.384 rows=0 loops=8)\n Worker 0: actual time=0.001..0.001 rows=0 loops=1\n Worker 1: actual time=0.001..0.001 rows=0 loops=1\n Worker 2: actual time=0.001..0.001 rows=0 loops=1\n Worker 3: actual time=0.001..0.001 rows=0 loops=1\n Worker 4: actual time=0.001..0.001 rows=0 loops=1\n Worker 5: actual time=0.001..0.001 rows=0 loops=1\n Worker 6: actual time=0.001..0.001 rows=0 loops=1\n Subplans Removed: 99\n -> Parallel Seq Scan on public.hp40 hp_1 (cost=0.00..126.50\nrows=33 width=4) (actual time=0.041..3.060 rows=1 loops=1)\n Output: hp_1.a\n Filter: (hp_1.a = $1)\n Rows Removed by Filter: 9813\n Planning Time: 5.543 ms\n Execution Time: 25.139 ms\n(19 rows)\n\ndeallocate q;\nset max_parallel_workers_per_gather to 0;\nprepare q as select * from hp where a = $1;\nexplain (analyze, verbose) execute q (1);\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Append (cost=0.00..18754.88 rows=5675 width=4) (actual\ntime=0.029..2.474 rows=1 loops=1)\n Subplans Removed: 99\n -> Seq Scan on public.hp40 hp_1 (cost=0.00..184.25 
rows=56\nwidth=4) (actual time=0.028..2.471 rows=1 loops=1)\n Output: hp_1.a\n Filter: (hp_1.a = $1)\n Rows Removed by Filter: 9813\n Planning Time: 2.231 ms\n Execution Time: 2.535 ms\n(8 rows)\n\nComparing the Execution Times above, it's clear that Gather and\nworkers are pure overhead in this case.\n\nAlthough in cases where one expects runtime pruning to be useful, the\nplan itself is very unlikely to be parallelized. For example, when the\nindividual partition scans are Index Scans.\n\ndeallocate q;\ncreate index on hp (a);\nalter table hp set (parallel_workers = 16);\nanalyze;\nset max_parallel_workers_per_gather to 16;\nprepare q as select * from hp where a = $1;\nexplain (analyze, verbose) execute q (1);\n QUERY\nPLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------\n-\n Append (cost=0.29..430.75 rows=100 width=4) (actual\ntime=0.043..0.046 rows=1 loops=1)\n Subplans Removed: 99\n -> Index Only Scan using hp40_a_idx on public.hp40 hp_1\n(cost=0.29..4.30 rows=1 width=4) (actual time=0.042..0.044 rows=1\nloops=1)\n Output: hp_1.a\n Index Cond: (hp_1.a = $1)\n Heap Fetches: 0\n Planning Time: 13.769 ms\n Execution Time: 0.115 ms\n(8 rows)\n\n> I am not sure how that could be improved.\n\nThe planner currently ignores runtime pruning optimization when\nassigning costs to the Append path, so fixing that would be a good\nstart. There are efforts underway for that, such as [1].\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://commitfest.postgresql.org/32/2829/\n\n\n", "msg_date": "Tue, 16 Feb 2021 16:29:05 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "On Tue, 2021-02-16 at 16:29 +0900, Amit Langote wrote:\n> > I am +1 on allowing to override the degree of parallelism on a parallel\n> > append. 
If \"parallel_workers\" on the partitioned table is an option for\n> > that, it might be a simple solution. On the other hand, perhaps it would\n> > be less confusing to have a different storage parameter name rather than\n> > having \"parallel_workers\" do double duty.\n> > Also, since there is a design rule that storage parameters can only be used\n> > on partitions, we would have to change that - is that a problem for anybody?\n> \n> I am not aware of a rule that suggests that parallel_workers is always\n> interpreted using storage-level considerations. If that is indeed a\n> popular interpretation at this point, then yes, we should be open to\n> considering a new name for the parameter that this patch wants to add.\n\nWell, https://www.postgresql.org/docs/current/sql-createtable.html#SQL-CREATETABLE-STORAGE-PARAMETERS\nsays:\n\n \"Specifying these parameters for partitioned tables is not supported,\n but you may specify them for individual leaf partitions.\"\n\nIf we re-purpose \"parallel_workers\" like this, we'd have to change this.\n\nThen for a normal table, \"parallel_workers\" would mean how many workers\nwork on a parallel table scan. For a partitioned table, it determines\nhow many workers work on a parallel append.\n\nPerhaps that is similar enough that it is not confusing.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Tue, 16 Feb 2021 15:01:59 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "On Tue, Feb 16, 2021 at 11:02 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> On Tue, 2021-02-16 at 16:29 +0900, Amit Langote wrote:\n> > > I am +1 on allowing to override the degree of parallelism on a parallel\n> > > append. If \"parallel_workers\" on the partitioned table is an option for\n> > > that, it might be a simple solution. 
On the other hand, perhaps it would\n> > > be less confusing to have a different storage parameter name rather than\n> > > having \"parallel_workers\" do double duty.\n> > > Also, since there is a design rule that storage parameters can only be used\n> > > on partitions, we would have to change that - is that a problem for anybody?\n> >\n> > I am not aware of a rule that suggests that parallel_workers is always\n> > interpreted using storage-level considerations. If that is indeed a\n> > popular interpretation at this point, then yes, we should be open to\n> > considering a new name for the parameter that this patch wants to add.\n>\n> Well, https://www.postgresql.org/docs/current/sql-createtable.html#SQL-CREATETABLE-STORAGE-PARAMETERS\n> says:\n>\n> \"Specifying these parameters for partitioned tables is not supported,\n> but you may specify them for individual leaf partitions.\"\n>\n> If we re-purpose \"parallel_workers\" like this, we'd have to change this.\n\nRight, as I mentioned in my reply, the patch will need to update the\ndocumentation.\n\n> Then for a normal table, \"parallel_workers\" would mean how many workers\n> work on a parallel table scan. For a partitioned table, it determines\n> how many workers work on a parallel append.\n>\n> Perhaps that is similar enough that it is not confusing.\n\nI tend to agree with that.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 17 Feb 2021 17:48:08 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "> hi,\n> \n> Here we go, my first patch... 
solves\n> https://www.postgresql.org/message-id/7d6fdc20-857c-4cbe-ae2e-c0ff9520\n> ed55@www.fastmail.com\n> \n\nHi,\n\npartitioned_table_reloptions(Datum reloptions, bool validate)\n {\n+\tstatic const relopt_parse_elt tab[] = {\n+\t\t{\"parallel_workers\", RELOPT_TYPE_INT,\n+\t\toffsetof(StdRdOptions, parallel_workers)},\n+\t};\n+\n \treturn (bytea *) build_reloptions(reloptions, validate,\n \t\t\t\t\t\t\t\t\t RELOPT_KIND_PARTITIONED,\n-\t\t\t\t\t\t\t\t\t 0, NULL, 0);\n+\t\t\t\t\t\t\t\t\t sizeof(StdRdOptions),\n+\t\t\t\t\t\t\t\t\t tab, lengthof(tab));\n }\n\nI noticed that you plan to store the parallel_workers in the same struct(StdRdOptions) as normal heap relation.\nIt seems better to store it in a separate struct.\n\nAnd as commit 1bbd608 said:\n----\n> Splitting things has the advantage to\n> make the information stored in rd_options include only the necessary\n> information, reducing the amount of memory used for a relcache entry\n> with partitioned tables if new reloptions are introduced at this level.\n----\n\nWhat do you think?\n\nBest regards,\nHouzj\n\n\n\n\n\n\n\n\n", "msg_date": "Thu, 18 Feb 2021 09:06:26 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "On Thu, Feb 18, 2021 at 6:06 PM Hou, Zhijie <houzj.fnst@cn.fujitsu.com> wrote:\n> > Here we go, my first patch... 
solves\n> > https://www.postgresql.org/message-id/7d6fdc20-857c-4cbe-ae2e-c0ff9520\n> > ed55@www.fastmail.com\n> >\n>\n> Hi,\n>\n> partitioned_table_reloptions(Datum reloptions, bool validate)\n> {\n> + static const relopt_parse_elt tab[] = {\n> + {\"parallel_workers\", RELOPT_TYPE_INT,\n> + offsetof(StdRdOptions, parallel_workers)},\n> + };\n> +\n> return (bytea *) build_reloptions(reloptions, validate,\n> RELOPT_KIND_PARTITIONED,\n> - 0, NULL, 0);\n> + sizeof(StdRdOptions),\n> + tab, lengthof(tab));\n> }\n>\n> I noticed that you plan to store the parallel_workers in the same struct(StdRdOptions) as normal heap relation.\n> It seems better to store it in a separate struct.\n>\n> And as commit 1bbd608 said:\n> ----\n> > Splitting things has the advantage to\n> > make the information stored in rd_options include only the necessary\n> > information, reducing the amount of memory used for a relcache entry\n> > with partitioned tables if new reloptions are introduced at this level.\n> ----\n>\n> What do you think?\n\nThat may be a good idea. So instead of referring to the\nparallel_workers in StdRdOptions, define a new one, say,\nPartitionedTableRdOptions as follows and refer to it in\npartitioned_table_reloptions():\n\ntypedef struct PartitionedTableRdOptions\n{\n int32 vl_len_; /* varlena header (do not touch directly!) */\n int parallel_workers; /* max number of parallel workers */\n} PartitionedTableRdOptions;\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Feb 2021 18:21:53 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "On Tue, Feb 16, 2021 at 3:05 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Feb 16, 2021 at 1:35 AM Seamus Abshere <seamus@abshere.net> wrote:\n> > Here we go, my first patch... 
solves https://www.postgresql.org/message-id/7d6fdc20-857c-4cbe-ae2e-c0ff9520ed55@www.fastmail.com\n>\n> Thanks for sending the patch here.\n>\n> It seems you haven't made enough changes for reloptions code to\n> recognize parallel_workers as valid for partitioned tables, because\n> even with the patch applied, I get this:\n>\n> create table rp (a int) partition by range (a);\n> create table rp1 partition of rp for values from (minvalue) to (0);\n> create table rp2 partition of rp for values from (0) to (maxvalue);\n> alter table rp set (parallel_workers = 1);\n> ERROR: unrecognized parameter \"parallel_workers\"\n>\n> You need this:\n>\n> diff --git a/src/backend/access/common/reloptions.c\n> b/src/backend/access/common/reloptions.c\n> index 029a73325e..9eb8a0c10d 100644\n> --- a/src/backend/access/common/reloptions.c\n> +++ b/src/backend/access/common/reloptions.c\n> @@ -377,7 +377,7 @@ static relopt_int intRelOpts[] =\n> {\n> \"parallel_workers\",\n> \"Number of parallel processes that can be used per\n> executor node for this relation.\",\n> - RELOPT_KIND_HEAP,\n> + RELOPT_KIND_HEAP | RELOPT_KIND_PARTITIONED,\n> ShareUpdateExclusiveLock\n> },\n> -1, 0, 1024\n>\n> which tells reloptions parsing code that parallel_workers is\n> acceptable for both heap and partitioned relations.\n>\n> Other comments on the patch:\n>\n> * This function calling style, where the first argument is not placed\n> on the same line as the function itself, is not very common in\n> Postgres:\n>\n> + /* First see if there is a root-level setting for parallel_workers */\n> + parallel_workers = compute_parallel_worker(\n> + rel,\n> + -1,\n> + -1,\n> + max_parallel_workers_per_gather\n> +\n>\n> This makes the new code look very different from the rest of the\n> codebase. Better to stick to existing styles.\n>\n> 2. It might be a good idea to use this opportunity to add a function,\n> say compute_append_parallel_workers(), for the code that does what the\n> function name says. 
Then the patch will simply add the new\n> compute_parallel_worker() call at the top of that function.\n>\n> 3. I think we should consider the Append parent relation's\n> parallel_workers ONLY if it is a partitioned relation, because it\n> doesn't make a lot of sense for other types of parent relations. So\n> the new code should look like this:\n>\n> if (IS_PARTITIONED_REL(rel))\n> parallel_workers = compute_parallel_worker(rel, -1, -1,\n> max_parallel_workers_per_gather);\n>\n> 4. Maybe it also doesn't make sense to consider the parent relation's\n> parallel_workers if Parallel Append is disabled\n> (enable_parallel_append = off). That's because with a simple\n> (non-parallel) Append running under Gather, all launched parallel\n> workers process the same partition before moving to the next one.\n> OTOH, one's intention of setting parallel_workers on the parent\n> partitioned table would most likely be to distribute workers across\n> partitions, which is only possible with parallel Append\n> (enable_parallel_append = on). So, the new code should look like\n> this:\n>\n> if (IS_PARTITIONED_REL(rel) && enable_parallel_append)\n> parallel_workers = compute_parallel_worker(rel, -1, -1,\n> max_parallel_workers_per_gather);\n\nHere is an updated version of the Seamus' patch that takes into\naccount these and other comments received on this thread so far.\nMaybe warrants adding some tests too but I haven't.\n\nSeamus, please register this patch in the next commit-fest:\nhttps://commitfest.postgresql.org/32/\n\nIf you haven't already, you will need to create a community account to\nuse that site.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 19 Feb 2021 16:30:46 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "hi Amit,\n\nThanks so much for doing this. 
I had created\n\nhttps://commitfest.postgresql.org/32/2987/\n\nand it looks like it now shows your patch as the one to use. Let me know if I should do anything else.\n\nBest,\nSeamus\n\nOn Fri, Feb 19, 2021, at 2:30 AM, Amit Langote wrote:\n> On Tue, Feb 16, 2021 at 3:05 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Tue, Feb 16, 2021 at 1:35 AM Seamus Abshere <seamus@abshere.net> wrote:\n> > > Here we go, my first patch... solves https://www.postgresql.org/message-id/7d6fdc20-857c-4cbe-ae2e-c0ff9520ed55@www.fastmail.com\n> >\n> > Thanks for sending the patch here.\n> >\n> > It seems you haven't made enough changes for reloptions code to\n> > recognize parallel_workers as valid for partitioned tables, because\n> > even with the patch applied, I get this:\n> >\n> > create table rp (a int) partition by range (a);\n> > create table rp1 partition of rp for values from (minvalue) to (0);\n> > create table rp2 partition of rp for values from (0) to (maxvalue);\n> > alter table rp set (parallel_workers = 1);\n> > ERROR: unrecognized parameter \"parallel_workers\"\n> >\n> > You need this:\n> >\n> > diff --git a/src/backend/access/common/reloptions.c\n> > b/src/backend/access/common/reloptions.c\n> > index 029a73325e..9eb8a0c10d 100644\n> > --- a/src/backend/access/common/reloptions.c\n> > +++ b/src/backend/access/common/reloptions.c\n> > @@ -377,7 +377,7 @@ static relopt_int intRelOpts[] =\n> > {\n> > \"parallel_workers\",\n> > \"Number of parallel processes that can be used per\n> > executor node for this relation.\",\n> > - RELOPT_KIND_HEAP,\n> > + RELOPT_KIND_HEAP | RELOPT_KIND_PARTITIONED,\n> > ShareUpdateExclusiveLock\n> > },\n> > -1, 0, 1024\n> >\n> > which tells reloptions parsing code that parallel_workers is\n> > acceptable for both heap and partitioned relations.\n> >\n> > Other comments on the patch:\n> >\n> > * This function calling style, where the first argument is not placed\n> > on the same line as the function itself, is not very common in\n> > 
Postgres:\n> >\n> > + /* First see if there is a root-level setting for parallel_workers */\n> > + parallel_workers = compute_parallel_worker(\n> > + rel,\n> > + -1,\n> > + -1,\n> > + max_parallel_workers_per_gather\n> > +\n> >\n> > This makes the new code look very different from the rest of the\n> > codebase. Better to stick to existing styles.\n> >\n> > 2. It might be a good idea to use this opportunity to add a function,\n> > say compute_append_parallel_workers(), for the code that does what the\n> > function name says. Then the patch will simply add the new\n> > compute_parallel_worker() call at the top of that function.\n> >\n> > 3. I think we should consider the Append parent relation's\n> > parallel_workers ONLY if it is a partitioned relation, because it\n> > doesn't make a lot of sense for other types of parent relations. So\n> > the new code should look like this:\n> >\n> > if (IS_PARTITIONED_REL(rel))\n> > parallel_workers = compute_parallel_worker(rel, -1, -1,\n> > max_parallel_workers_per_gather);\n> >\n> > 4. Maybe it also doesn't make sense to consider the parent relation's\n> > parallel_workers if Parallel Append is disabled\n> > (enable_parallel_append = off). That's because with a simple\n> > (non-parallel) Append running under Gather, all launched parallel\n> > workers process the same partition before moving to the next one.\n> > OTOH, one's intention of setting parallel_workers on the parent\n> > partitioned table would most likely be to distribute workers across\n> > partitions, which is only possible with parallel Append\n> > (enable_parallel_append = on). 
So, the new code should look like\n> > this:\n> >\n> > if (IS_PARTITIONED_REL(rel) && enable_parallel_append)\n> > parallel_workers = compute_parallel_worker(rel, -1, -1,\n> > max_parallel_workers_per_gather);\n> \n> Here is an updated version of the Seamus' patch that takes into\n> account these and other comments received on this thread so far.\n> Maybe warrants adding some tests too but I haven't.\n> \n> Seamus, please register this patch in the next commit-fest:\n> https://commitfest.postgresql.org/32/\n> \n> If you haven't already, you will need to create a community account to\n> use that site.\n> \n> -- \n> Amit Langote\n> EDB: http://www.enterprisedb.com\n> \n> Attachments:\n> * v2-0001-Allow-setting-parallel_workers-on-partitioned-tab.patch\n\n\n", "msg_date": "Fri, 19 Feb 2021 09:53:46 -0500", "msg_from": "\"Seamus Abshere\" <seamus@abshere.net>", "msg_from_op": true, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "On Fri, Feb 19, 2021 at 11:54 PM Seamus Abshere <seamus@abshere.net> wrote:\n> On Fri, Feb 19, 2021, at 2:30 AM, Amit Langote wrote:\n> > Here is an updated version of the Seamus' patch that takes into\n> > account these and other comments received on this thread so far.\n> > Maybe warrants adding some tests too but I haven't.\n> >\n> > Seamus, please register this patch in the next commit-fest:\n> > https://commitfest.postgresql.org/32/\n> >\n> > If you haven't already, you will need to create a community account to\n> > use that site.\n>\n> hi Amit,\n>\n> Thanks so much for doing this. I had created\n>\n> https://commitfest.postgresql.org/32/2987/\n\nAh, sorry, I had not checked. I updated the entry to add you as the author.\n\n> and it looks like it now shows your patch as the one to use. Let me know if I should do anything else.\n\nYou could take a look at the latest patch and see if you find any\nproblems with my or others' suggestions that I implemented in the v2\npatch. 
Also, please add regression tests for the new feature in\nsrc/test/regress/sql/select_parallel.sql.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 20 Feb 2021 11:16:50 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "On Fri, 2021-02-19 at 16:30 +0900, Amit Langote wrote:\n> On Tue, Feb 16, 2021 at 1:35 AM Seamus Abshere <seamus@abshere.net> wrote:\n> > > Here we go, my first patch... solves https://www.postgresql.org/message-id/7d6fdc20-857c-4cbe-ae2e-c0ff9520ed55@www.fastmail.com\n> \n> Here is an updated version of the Seamus' patch that takes into\n> account these and other comments received on this thread so far.\n> Maybe warrants adding some tests too but I haven't.\n\nYes, there should be regression tests.\n\nI gave the patch a spin, and it allows to raise the number of workers for\na parallel append as advertised.\n\n--- a/doc/src/sgml/ref/create_table.sgml\n+++ b/doc/src/sgml/ref/create_table.sgml\n@@ -1337,8 +1337,9 @@ WITH ( MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>, REM\n If a table parameter value is set and the\n equivalent <literal>toast.</literal> parameter is not, the TOAST table\n will use the table's parameter value.\n- Specifying these parameters for partitioned tables is not supported,\n- but you may specify them for individual leaf partitions.\n+ Specifying most of these parameters for partitioned tables is not\n+ supported, but you may specify them for individual leaf partitions;\n+ refer to the description of individual parameters for more details.\n </para>\n\nThis doesn't make me happy. 
Since the options themselves do not say if they\nare supported on partitioned tables or not, the reader is left in the dark.\n\nPerhaps:\n\n These options, with the exception of <literal>parallel_workers</literal>,\n are not supported on partitioned tables, but you may specify them for individual\n leaf partitions.\n\n@@ -1401,9 +1402,12 @@ WITH ( MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>, REM\n <para>\n This sets the number of workers that should be used to assist a parallel\n scan of this table. If not set, the system will determine a value based\n- on the relation size. The actual number of workers chosen by the planner\n- or by utility statements that use parallel scans may be less, for example\n- due to the setting of <xref linkend=\"guc-max-worker-processes\"/>.\n+ on the relation size. When set on a partitioned table, the specified\n+ number of workers will work on distinct partitions, so the number of\n+ partitions affected by the parallel operation should be taken into\n+ account. 
The actual number of workers chosen by the planner or by\n+ utility statements that use parallel scans may be less, for example due\n+ to the setting of <xref linkend=\"guc-max-worker-processes\"/>.\n </para>\n </listitem>\n </varlistentry>\n\nThe reader is left to believe that the default number of workers depends on the\nsize of the partitioned table, which is not entirely true.\n\nPerhaps:\n\n If not set, the system will determine a value based on the relation size and\n the number of scanned partitions.\n\n--- a/src/backend/optimizer/path/allpaths.c\n+++ b/src/backend/optimizer/path/allpaths.c\n@@ -1268,6 +1268,59 @@ set_append_rel_pathlist(PlannerInfo *root, RelOptInfo *rel,\n add_paths_to_append_rel(root, rel, live_childrels);\n }\n \n+/*\n+ * compute_append_parallel_workers\n+ * Computes the number of workers to assign to scan the subpaths appended\n+ * by a given Append path\n+ */\n+static int\n+compute_append_parallel_workers(RelOptInfo *rel, List *subpaths,\n+ int num_live_children,\n+ bool parallel_append)\n\nThe new function should have a prototype.\n\n+{\n+ ListCell *lc;\n+ int parallel_workers = 0;\n+\n+ /*\n+ * For partitioned rels, first see if there is a root-level setting for\n+ * parallel_workers. But only consider if a Parallel Append plan is\n+ * to be considered.\n+ */\n+ if (IS_PARTITIONED_REL(rel) && parallel_append)\n+ parallel_workers =\n+ compute_parallel_worker(rel, -1, -1,\n+ max_parallel_workers_per_gather);\n+\n+ /* Find the highest number of workers requested for any subpath. */\n+ foreach(lc, subpaths)\n+ {\n+ Path *path = lfirst(lc);\n+\n+ parallel_workers = Max(parallel_workers, path->parallel_workers);\n+ }\n+ Assert(parallel_workers > 0 || subpaths == NIL);\n+\n+ /*\n+ * If the use of parallel append is permitted, always request at least\n+ * log2(# of children) workers. We assume it can be useful to have\n+ * extra workers in this case because they will be spread out across\n+ * the children. 
The precise formula is just a guess, but we don't\n+ * want to end up with a radically different answer for a table with N\n+ * partitions vs. an unpartitioned table with the same data, so the\n+ * use of some kind of log-scaling here seems to make some sense.\n+ */\n+ if (parallel_append)\n+ {\n+ parallel_workers = Max(parallel_workers,\n+ fls(num_live_children));\n+ parallel_workers = Min(parallel_workers,\n+ max_parallel_workers_per_gather);\n+ }\n+ Assert(parallel_workers > 0);\n+\n+ return parallel_workers;\n+}\n\nThat means that it is not possible to *lower* the number of parallel workers\nwith this reloption, which seems to me a valid use case.\n\nI think that if the option is set, it should override the number of workers\ninherited from the partitions, and it should override the log2 default.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Sat, 20 Feb 2021 04:54:59 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "> > 4. Maybe it also doesn't make sense to consider the parent relation's\r\n> > parallel_workers if Parallel Append is disabled\r\n> > (enable_parallel_append = off). That's because with a simple\r\n> > (non-parallel) Append running under Gather, all launched parallel\r\n> > workers process the same partition before moving to the next one.\r\n> > OTOH, one's intention of setting parallel_workers on the parent\r\n> > partitioned table would most likely be to distribute workers across\r\n> > partitions, which is only possible with parallel Append\r\n> > (enable_parallel_append = on). 
So, the new code should look like\r\n> > this:\r\n> >\r\n> > if (IS_PARTITIONED_REL(rel) && enable_parallel_append)\r\n> > parallel_workers = compute_parallel_worker(rel, -1, -1,\r\n> > max_parallel_workers_per_gather);\r\n> \r\n> Here is an updated version of the Seamus' patch that takes into account these\r\n> and other comments received on this thread so far.\r\n> Maybe warrants adding some tests too but I haven't.\r\n> \r\n> Seamus, please register this patch in the next commit-fest:\r\n> https://commitfest.postgresql.org/32/\r\n> \r\n> If you haven't already, you will need to create a community account to use that\r\n> site.\r\n\r\nIt seems the patch does not include the code that get the parallel_workers from new struct \" PartitionedTableRdOptions \",\r\nDid I miss something ?\r\n\r\nBest regards,\r\nhouzj\r\n\r\n", "msg_date": "Tue, 23 Feb 2021 06:12:28 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "Hi,\n\nOn Tue, Feb 23, 2021 at 3:12 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> > Here is an updated version of the Seamus' patch that takes into account these\n> > and other comments received on this thread so far.\n> > Maybe warrants adding some tests too but I haven't.\n> >\n> > Seamus, please register this patch in the next commit-fest:\n> > https://commitfest.postgresql.org/32/\n> >\n> > If you haven't already, you will need to create a community account to use that\n> > site.\n>\n> It seems the patch does not include the code that get the parallel_workers from new struct \" PartitionedTableRdOptions \",\n> Did I miss something ?\n\nAren't the following hunks in the v2 patch what you meant?\n\ndiff --git a/src/backend/access/common/reloptions.c\nb/src/backend/access/common/reloptions.c\nindex c687d3ee9e..f8443d2361 100644\n--- a/src/backend/access/common/reloptions.c\n+++ 
b/src/backend/access/common/reloptions.c\n@@ -377,7 +377,7 @@ static relopt_int intRelOpts[] =\n {\n \"parallel_workers\",\n \"Number of parallel processes that can be used per executor node for\nthis relation.\",\n- RELOPT_KIND_HEAP,\n+ RELOPT_KIND_HEAP | RELOPT_KIND_PARTITIONED,\n ShareUpdateExclusiveLock\n },\n -1, 0, 1024\n@@ -1962,12 +1962,18 @@ bytea *\n partitioned_table_reloptions(Datum reloptions, bool validate)\n {\n /*\n- * There are no options for partitioned tables yet, but this is able to do\n- * some validation.\n+ * Currently the only setting known to be useful for partitioned tables\n+ * is parallel_workers.\n */\n+ static const relopt_parse_elt tab[] = {\n+ {\"parallel_workers\", RELOPT_TYPE_INT,\n+ offsetof(PartitionedTableRdOptions, parallel_workers)},\n+ };\n+\n return (bytea *) build_reloptions(reloptions, validate,\n RELOPT_KIND_PARTITIONED,\n- 0, NULL, 0);\n+ sizeof(PartitionedTableRdOptions),\n+ tab, lengthof(tab));\n }\n\n /*\n\ndiff --git a/src/include/utils/rel.h b/src/include/utils/rel.h\nindex 10b63982c0..fe114e0856 100644\n--- a/src/include/utils/rel.h\n+++ b/src/include/utils/rel.h\n@@ -308,6 +308,16 @@ typedef struct StdRdOptions\n bool vacuum_truncate; /* enables vacuum to truncate a relation */\n } StdRdOptions;\n\n+/*\n+ * PartitionedTableRdOptions\n+ * Contents of rd_options for partitioned tables\n+ */\n+typedef struct PartitionedTableRdOptions\n+{\n+ int32 vl_len_; /* varlena header (do not touch directly!) 
*/\n+ int parallel_workers; /* max number of parallel workers */\n+} PartitionedTableRdOptions;\n+\n #define HEAP_MIN_FILLFACTOR 10\n #define HEAP_DEFAULT_FILLFACTOR 100\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 23 Feb 2021 18:32:53 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "> > It seems the patch does not include the code that get the\r\n> > parallel_workers from new struct \" PartitionedTableRdOptions \", Did I miss\r\n> something ?\r\n> \r\n> Aren't the following hunks in the v2 patch what you meant?\r\n> \r\n> diff --git a/src/backend/access/common/reloptions.c\r\n> b/src/backend/access/common/reloptions.c\r\n> index c687d3ee9e..f8443d2361 100644\r\n> --- a/src/backend/access/common/reloptions.c\r\n> +++ b/src/backend/access/common/reloptions.c\r\n> @@ -377,7 +377,7 @@ static relopt_int intRelOpts[] =\r\n> {\r\n> \"parallel_workers\",\r\n> \"Number of parallel processes that can be used per executor node for this\r\n> relation.\",\r\n> - RELOPT_KIND_HEAP,\r\n> + RELOPT_KIND_HEAP | RELOPT_KIND_PARTITIONED,\r\n> ShareUpdateExclusiveLock\r\n> },\r\n> -1, 0, 1024\r\n> @@ -1962,12 +1962,18 @@ bytea *\r\n> partitioned_table_reloptions(Datum reloptions, bool validate) {\r\n> /*\r\n> - * There are no options for partitioned tables yet, but this is able to do\r\n> - * some validation.\r\n> + * Currently the only setting known to be useful for partitioned tables\r\n> + * is parallel_workers.\r\n> */\r\n> + static const relopt_parse_elt tab[] = { {\"parallel_workers\",\r\n> + RELOPT_TYPE_INT, offsetof(PartitionedTableRdOptions,\r\n> + parallel_workers)}, };\r\n> +\r\n> return (bytea *) build_reloptions(reloptions, validate,\r\n> RELOPT_KIND_PARTITIONED,\r\n> - 0, NULL, 0);\r\n> + sizeof(PartitionedTableRdOptions),\r\n> + tab, lengthof(tab));\r\n> }\r\n> \r\n> /*\r\n> \r\n> diff --git 
a/src/include/utils/rel.h b/src/include/utils/rel.h index\r\n> 10b63982c0..fe114e0856 100644\r\n> --- a/src/include/utils/rel.h\r\n> +++ b/src/include/utils/rel.h\r\n> @@ -308,6 +308,16 @@ typedef struct StdRdOptions\r\n> bool vacuum_truncate; /* enables vacuum to truncate a relation */ }\r\n> StdRdOptions;\r\n> \r\n> +/*\r\n> + * PartitionedTableRdOptions\r\n> + * Contents of rd_options for partitioned tables */ typedef struct\r\n> +PartitionedTableRdOptions {\r\n> + int32 vl_len_; /* varlena header (do not touch directly!) */ int\r\n> +parallel_workers; /* max number of parallel workers */ }\r\n> +PartitionedTableRdOptions;\r\n> +\r\n> #define HEAP_MIN_FILLFACTOR 10\r\n> #define HEAP_DEFAULT_FILLFACTOR 100\r\nHi,\r\n\r\nI am not sure.\r\nIMO, for normal table, we use the following macro to get the parallel_workers:\r\n----------------------\r\n/*\r\n * RelationGetParallelWorkers\r\n *\t\tReturns the relation's parallel_workers reloption setting.\r\n *\t\tNote multiple eval of argument!\r\n */\r\n#define RelationGetParallelWorkers(relation, defaultpw) \\\r\n\t((relation)->rd_options ? 
\\\r\n\t ((StdRdOptions *) (relation)->rd_options)->parallel_workers : (defaultpw))\r\n----------------------\r\n\r\nSince we add new struct \" PartitionedTableRdOptions \", It seems we need to get parallel_workers in different way.\r\nDo we need similar macro to get partitioned table's parallel_workers ?\r\n\r\nLike:\r\n#define PartitionedTableGetParallelWorkers(relation, defaultpw) \\ \r\nxxx\r\n(PartitionedTableRdOptions *) (relation)->rd_options)->parallel_workers : (defaultpw))\r\n\r\nBest regards,\r\nhouzj\r\n\r\n", "msg_date": "Tue, 23 Feb 2021 10:24:29 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "On Tue, Feb 23, 2021 at 7:24 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> > > It seems the patch does not include the code that get the\n> > > parallel_workers from new struct \" PartitionedTableRdOptions \", Did I miss\n> > something ?\n> >\n> > Aren't the following hunks in the v2 patch what you meant?\n> >\n> > diff --git a/src/backend/access/common/reloptions.c\n> > b/src/backend/access/common/reloptions.c\n> > index c687d3ee9e..f8443d2361 100644\n> > --- a/src/backend/access/common/reloptions.c\n> > +++ b/src/backend/access/common/reloptions.c\n> > @@ -377,7 +377,7 @@ static relopt_int intRelOpts[] =\n> > {\n> > \"parallel_workers\",\n> > \"Number of parallel processes that can be used per executor node for this\n> > relation.\",\n> > - RELOPT_KIND_HEAP,\n> > + RELOPT_KIND_HEAP | RELOPT_KIND_PARTITIONED,\n> > ShareUpdateExclusiveLock\n> > },\n> > -1, 0, 1024\n> > @@ -1962,12 +1962,18 @@ bytea *\n> > partitioned_table_reloptions(Datum reloptions, bool validate) {\n> > /*\n> > - * There are no options for partitioned tables yet, but this is able to do\n> > - * some validation.\n> > + * Currently the only setting known to be useful for partitioned tables\n> > + * is parallel_workers.\n> > */\n> > + 
static const relopt_parse_elt tab[] = { {\"parallel_workers\",\n> > + RELOPT_TYPE_INT, offsetof(PartitionedTableRdOptions,\n> > + parallel_workers)}, };\n> > +\n> > return (bytea *) build_reloptions(reloptions, validate,\n> > RELOPT_KIND_PARTITIONED,\n> > - 0, NULL, 0);\n> > + sizeof(PartitionedTableRdOptions),\n> > + tab, lengthof(tab));\n> > }\n> >\n> > /*\n> >\n> > diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h index\n> > 10b63982c0..fe114e0856 100644\n> > --- a/src/include/utils/rel.h\n> > +++ b/src/include/utils/rel.h\n> > @@ -308,6 +308,16 @@ typedef struct StdRdOptions\n> > bool vacuum_truncate; /* enables vacuum to truncate a relation */ }\n> > StdRdOptions;\n> >\n> > +/*\n> > + * PartitionedTableRdOptions\n> > + * Contents of rd_options for partitioned tables */ typedef struct\n> > +PartitionedTableRdOptions {\n> > + int32 vl_len_; /* varlena header (do not touch directly!) */ int\n> > +parallel_workers; /* max number of parallel workers */ }\n> > +PartitionedTableRdOptions;\n> > +\n> > #define HEAP_MIN_FILLFACTOR 10\n> > #define HEAP_DEFAULT_FILLFACTOR 100\n> Hi,\n>\n> I am not sure.\n> IMO, for normal table, we use the following macro to get the parallel_workers:\n> ----------------------\n> /*\n> * RelationGetParallelWorkers\n> * Returns the relation's parallel_workers reloption setting.\n> * Note multiple eval of argument!\n> */\n> #define RelationGetParallelWorkers(relation, defaultpw) \\\n> ((relation)->rd_options ? \\\n> ((StdRdOptions *) (relation)->rd_options)->parallel_workers : (defaultpw))\n> ----------------------\n>\n> Since we add new struct \" PartitionedTableRdOptions \", It seems we need to get parallel_workers in different way.\n> Do we need similar macro to get partitioned table's parallel_workers ?\n\nOh, you're right. 
The parallel_workers setting of a relation is only\naccessed through this macro, even for partitioned tables, and I can\nsee that it is actually wrong to access a partitioned table's\nparallel_workers through this macro as-is. Although I hadn't tested\nthat, so thanks for pointing that out.\n\n> Like:\n> #define PartitionedTableGetParallelWorkers(relation, defaultpw) \\\n> xxx\n> (PartitionedTableRdOptions *) (relation)->rd_options)->parallel_workers : (defaultpw))\n\nI'm thinking it would be better to just modify the existing macro to\ncheck relkind to decide which struct pointer type to cast the value in\nrd_options to.\n\nI will post an updated patch later.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 1 Mar 2021 17:39:13 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "On Mon, 2021-03-01 at 17:39 +0900, Amit Langote wrote:\n> > I am not sure.\n> > IMO, for normal table, we use the following macro to get the parallel_workers:\n> > ----------------------\n> > /*\n> > * RelationGetParallelWorkers\n> > * Returns the relation's parallel_workers reloption setting.\n> > * Note multiple eval of argument!\n> > */\n> > #define RelationGetParallelWorkers(relation, defaultpw) \\\n> > ((relation)->rd_options ? \\\n> > ((StdRdOptions *) (relation)->rd_options)->parallel_workers : (defaultpw))\n> > ----------------------\n> > Since we add new struct \" PartitionedTableRdOptions \", It seems we need to get parallel_workers in different way.\n> > Do we need similar macro to get partitioned table's parallel_workers ?\n> \n> Oh, you're right. The parallel_workers setting of a relation is only\n> accessed through this macro, even for partitioned tables, and I can\n> see that it is actually wrong to access a partitioned table's\n> parallel_workers through this macro as-is. 
Although I hadn't tested\n> that, so thanks for pointing that out.\n> \n> > Like:\n> > #define PartitionedTableGetParallelWorkers(relation, defaultpw) \\\n> > xxx\n> > (PartitionedTableRdOptions *) (relation)->rd_options)->parallel_workers : (defaultpw))\n> \n> I'm thinking it would be better to just modify the existing macro to\n> check relkind to decide which struct pointer type to cast the value in\n> rd_options to.\n\nHere is an updated patch with this fix.\n\nI added regression tests and adapted the documentation a bit.\n\nI also added support for lowering the number of parallel workers.\n\nYours,\nLaurenz Albe", "msg_date": "Mon, 01 Mar 2021 16:10:26 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "On Tue, Mar 2, 2021 at 12:10 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> Here is an updated patch with this fix.\n\nThanks for updating the patch. I was about to post an updated version\nmyself but you beat me to it.\n\n> I added regression tests and adapted the documentation a bit.\n>\n> I also added support for lowering the number of parallel workers.\n\n+ALTER TABLE pagg_tab_ml SET (parallel_workers = 0);\n+EXPLAIN (COSTS OFF)\n+SELECT a FROM pagg_tab_ml WHERE b = 42;\n+ QUERY PLAN\n+---------------------------------------------------\n+ Append\n+ -> Seq Scan on pagg_tab_ml_p1 pagg_tab_ml_1\n+ Filter: (b = 42)\n+ -> Seq Scan on pagg_tab_ml_p2_s1 pagg_tab_ml_2\n+ Filter: (b = 42)\n+ -> Seq Scan on pagg_tab_ml_p2_s2 pagg_tab_ml_3\n+ Filter: (b = 42)\n+(7 rows)\n\nI got the same result with my implementation, but I am wondering if\nsetting parallel_workers=0 on the parent table shouldn't really\ndisable a regular (non-parallel-aware) Append running under Gather\neven if it does Parallel Append (parallel-aware)? 
So in this test\ncase, there should have been a Gather atop Append, with individual\npartitions scanned using Parallel Seq Scan where applicable.\n\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Mar 2021 11:23:39 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "On Tue, 2021-03-02 at 11:23 +0900, Amit Langote wrote:\n> +ALTER TABLE pagg_tab_ml SET (parallel_workers = 0);\n> +EXPLAIN (COSTS OFF)\n> +SELECT a FROM pagg_tab_ml WHERE b = 42;\n> + QUERY PLAN\n> +---------------------------------------------------\n> + Append\n> + -> Seq Scan on pagg_tab_ml_p1 pagg_tab_ml_1\n> + Filter: (b = 42)\n> + -> Seq Scan on pagg_tab_ml_p2_s1 pagg_tab_ml_2\n> + Filter: (b = 42)\n> + -> Seq Scan on pagg_tab_ml_p2_s2 pagg_tab_ml_3\n> + Filter: (b = 42)\n> +(7 rows)\n> \n> I got the same result with my implementation, but I am wondering if\n> setting parallel_workers=0 on the parent table shouldn't really\n> disable a regular (non-parallel-aware) Append running under Gather\n> even if it does Parallel Append (parallel-aware)? 
So in this test\n> case, there should have been a Gather atop Append, with individual\n> partitions scanned using Parallel Seq Scan where applicable.\n\nI am not sure, but I tend to think that if you specify no\nparallel workers, you want no parallel workers.\n\nBut I noticed the following:\n\n SET enable_partitionwise_aggregate = on;\n\n EXPLAIN (COSTS OFF)\n SELECT count(*) FROM pagg_tab_ml;\n QUERY PLAN \n ------------------------------------------------------------------------------\n Finalize Aggregate\n -> Gather\n Workers Planned: 4\n -> Parallel Append\n -> Partial Aggregate\n -> Parallel Seq Scan on pagg_tab_ml_p1 pagg_tab_ml\n -> Partial Aggregate\n -> Parallel Seq Scan on pagg_tab_ml_p3_s1 pagg_tab_ml_3\n -> Partial Aggregate\n -> Parallel Seq Scan on pagg_tab_ml_p2_s1 pagg_tab_ml_1\n -> Partial Aggregate\n -> Parallel Seq Scan on pagg_tab_ml_p3_s2 pagg_tab_ml_4\n -> Partial Aggregate\n -> Parallel Seq Scan on pagg_tab_ml_p2_s2 pagg_tab_ml_2\n (14 rows)\n\nThe default number of parallel workers is taken, because the append is\non an upper relation, not the partitioned table itself.\n\nOne would wish that \"parallel_workers\" somehow percolated up, but I\nhave no idea how that should work.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Tue, 02 Mar 2021 09:47:42 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "On Tue, Mar 2, 2021 at 5:47 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> On Tue, 2021-03-02 at 11:23 +0900, Amit Langote wrote:\n> > +ALTER TABLE pagg_tab_ml SET (parallel_workers = 0);\n> > +EXPLAIN (COSTS OFF)\n> > +SELECT a FROM pagg_tab_ml WHERE b = 42;\n> > + QUERY PLAN\n> > +---------------------------------------------------\n> > + Append\n> > + -> Seq Scan on pagg_tab_ml_p1 pagg_tab_ml_1\n> > + Filter: (b = 42)\n> > + -> Seq Scan on pagg_tab_ml_p2_s1 pagg_tab_ml_2\n> > + Filter: (b = 42)\n> > + -> Seq 
Scan on pagg_tab_ml_p2_s2 pagg_tab_ml_3\n> > + Filter: (b = 42)\n> > +(7 rows)\n> >\n> > I got the same result with my implementation, but I am wondering if\n> > setting parallel_workers=0 on the parent table shouldn't really\n> > disable a regular (non-parallel-aware) Append running under Gather\n> > even if it does Parallel Append (parallel-aware)? So in this test\n> > case, there should have been a Gather atop Append, with individual\n> > partitions scanned using Parallel Seq Scan where applicable.\n>\n> I am not sure, but I tend to think that if you specify no\n> parallel workers, you want no parallel workers.\n\nI am thinking that one would set parallel_workers on a parent\npartitioned table to control only how many workers a Parallel Append\ncan spread across partitions or use parallel_workers=0 to disable this\nform of partition parallelism. However, one may still want the\nindividual partitions to be scanned in parallel, where workers only\nspread across the partition's blocks. IMO, we should try to keep\nthose two forms of parallelism separately configurable.\n\n> But I noticed the following:\n>\n> SET enable_partitionwise_aggregate = on;\n>\n> EXPLAIN (COSTS OFF)\n> SELECT count(*) FROM pagg_tab_ml;\n> QUERY PLAN\n> ------------------------------------------------------------------------------\n> Finalize Aggregate\n> -> Gather\n> Workers Planned: 4\n> -> Parallel Append\n> -> Partial Aggregate\n> -> Parallel Seq Scan on pagg_tab_ml_p1 pagg_tab_ml\n> -> Partial Aggregate\n> -> Parallel Seq Scan on pagg_tab_ml_p3_s1 pagg_tab_ml_3\n> -> Partial Aggregate\n> -> Parallel Seq Scan on pagg_tab_ml_p2_s1 pagg_tab_ml_1\n> -> Partial Aggregate\n> -> Parallel Seq Scan on pagg_tab_ml_p3_s2 pagg_tab_ml_4\n> -> Partial Aggregate\n> -> Parallel Seq Scan on pagg_tab_ml_p2_s2 pagg_tab_ml_2\n> (14 rows)\n>\n> The default number of parallel workers is taken, because the append is\n> on an upper relation, not the partitioned table itself.\n>\n> One would wish that 
\"parallel_workers\" somehow percolated up,\n\nI would have liked that too.\n\n> but I\n> have no idea how that should work.\n\nIt appears that we don't set the fields of an upper relation such that\nIS_PARTITIONED_REL() would return true for it, like we do for base and\njoin relations. In compute_append_parallel_workers(), we're requiring\nit to be true to even look at the relation's rel_parallel_workers. We\ncan set those properties in *some* grouping rels, for example, when\nthe aggregation is grouped on the input relation's partition key.\nThat would make it possible for the Append on such grouping relations\nto refer to their input partitioned relation's rel_parallel_workers.\nFor example, with the attached PoC patch:\n\nSET parallel_setup_cost TO 0;\nSET max_parallel_workers_per_gather TO 8;\nSET enable_partitionwise_aggregate = on;\n\nalter table pagg_tab_ml set (parallel_workers=5);\n\nEXPLAIN (COSTS OFF) SELECT a, count(*) FROM pagg_tab_ml GROUP BY 1;\n QUERY PLAN\n---------------------------------------------------------------------\n Gather\n Workers Planned: 5\n -> Parallel Append\n -> HashAggregate\n Group Key: pagg_tab_ml_5.a\n -> Append\n -> Seq Scan on pagg_tab_ml_p3_s1 pagg_tab_ml_5\n -> Seq Scan on pagg_tab_ml_p3_s2 pagg_tab_ml_6\n -> HashAggregate\n Group Key: pagg_tab_ml.a\n -> Seq Scan on pagg_tab_ml_p1 pagg_tab_ml\n -> HashAggregate\n Group Key: pagg_tab_ml_2.a\n -> Append\n -> Seq Scan on pagg_tab_ml_p2_s1 pagg_tab_ml_2\n -> Seq Scan on pagg_tab_ml_p2_s2 pagg_tab_ml_3\n(16 rows)\n\nalter table pagg_tab_ml set (parallel_workers=0);\n\nEXPLAIN (COSTS OFF) SELECT a, count(*) FROM pagg_tab_ml GROUP BY 1;\n QUERY PLAN\n------------------------------------------------------------------------------------------\n Append\n -> Finalize GroupAggregate\n Group Key: pagg_tab_ml.a\n -> Gather Merge\n Workers Planned: 1\n -> Sort\n Sort Key: pagg_tab_ml.a\n -> Partial HashAggregate\n Group Key: pagg_tab_ml.a\n -> Parallel Seq Scan on pagg_tab_ml_p1 
pagg_tab_ml\n -> Finalize GroupAggregate\n Group Key: pagg_tab_ml_2.a\n -> Gather Merge\n Workers Planned: 2\n -> Sort\n Sort Key: pagg_tab_ml_2.a\n -> Parallel Append\n -> Partial HashAggregate\n Group Key: pagg_tab_ml_2.a\n -> Parallel Seq Scan on\npagg_tab_ml_p2_s1 pagg_tab_ml_2\n -> Partial HashAggregate\n Group Key: pagg_tab_ml_3.a\n -> Parallel Seq Scan on\npagg_tab_ml_p2_s2 pagg_tab_ml_3\n -> Finalize GroupAggregate\n Group Key: pagg_tab_ml_5.a\n -> Gather Merge\n Workers Planned: 2\n -> Sort\n Sort Key: pagg_tab_ml_5.a\n -> Parallel Append\n -> Partial HashAggregate\n Group Key: pagg_tab_ml_5.a\n -> Parallel Seq Scan on\npagg_tab_ml_p3_s1 pagg_tab_ml_5\n -> Partial HashAggregate\n Group Key: pagg_tab_ml_6.a\n -> Parallel Seq Scan on\npagg_tab_ml_p3_s2 pagg_tab_ml_6\n(36 rows)\n\nalter table pagg_tab_ml set (parallel_workers=9);\n\nEXPLAIN (COSTS OFF) SELECT a, count(*) FROM pagg_tab_ml GROUP BY 1;\n QUERY PLAN\n---------------------------------------------------------------------\n Gather\n Workers Planned: 8\n -> Parallel Append\n -> HashAggregate\n Group Key: pagg_tab_ml_5.a\n -> Append\n -> Seq Scan on pagg_tab_ml_p3_s1 pagg_tab_ml_5\n -> Seq Scan on pagg_tab_ml_p3_s2 pagg_tab_ml_6\n -> HashAggregate\n Group Key: pagg_tab_ml.a\n -> Seq Scan on pagg_tab_ml_p1 pagg_tab_ml\n -> HashAggregate\n Group Key: pagg_tab_ml_2.a\n -> Append\n -> Seq Scan on pagg_tab_ml_p2_s1 pagg_tab_ml_2\n -> Seq Scan on pagg_tab_ml_p2_s2 pagg_tab_ml_3\n(16 rows)\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 3 Mar 2021 17:58:32 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "On Wed, 2021-03-03 at 17:58 +0900, Amit Langote wrote:\n> On Tue, Mar 2, 2021 at 5:47 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > On Tue, 2021-03-02 at 11:23 +0900, Amit Langote wrote:\n> > > I got the same result with my implementation, but 
I am wondering if\n> > > setting parallel_workers=0 on the parent table shouldn't really\n> > > disable a regular (non-parallel-aware) Append running under Gather\n> > > even if it does Parallel Append (parallel-aware)? So in this test\n> > > case, there should have been a Gather atop Append, with individual\n> > > partitions scanned using Parallel Seq Scan where applicable.\n> > \n> > I am not sure, but I tend to think that if you specify no\n> > parallel workers, you want no parallel workers.\n> \n> I am thinking that one would set parallel_workers on a parent\n> partitioned table to control only how many workers a Parallel Append\n> can spread across partitions or use parallel_workers=0 to disable this\n> form of partition parallelism. However, one may still want the\n> individual partitions to be scanned in parallel, where workers only\n> spread across the partition's blocks. IMO, we should try to keep\n> those two forms of parallelism separately configurable.\n\nI see your point.\n\nI thought that PostgreSQL might consider such a plan anyway, but\nI am not deep enough into the partitioning code to know.\n\nThinking this further, wouldn't that mean that we get into a\nconflict if someone sets \"parallel_workers\" on both a partition and\nthe partitioned table? 
Which setting should win?\n\n> > SET enable_partitionwise_aggregate = on;\n> > \n> > EXPLAIN (COSTS OFF)\n> > SELECT count(*) FROM pagg_tab_ml;\n> > QUERY PLAN\n> > ------------------------------------------------------------------------------\n> > Finalize Aggregate\n> > -> Gather\n> > Workers Planned: 4\n> > -> Parallel Append\n> > -> Partial Aggregate\n> > -> Parallel Seq Scan on pagg_tab_ml_p1 pagg_tab_ml\n> > -> Partial Aggregate\n> > -> Parallel Seq Scan on pagg_tab_ml_p3_s1 pagg_tab_ml_3\n> > -> Partial Aggregate\n> > -> Parallel Seq Scan on pagg_tab_ml_p2_s1 pagg_tab_ml_1\n> > -> Partial Aggregate\n> > -> Parallel Seq Scan on pagg_tab_ml_p3_s2 pagg_tab_ml_4\n> > -> Partial Aggregate\n> > -> Parallel Seq Scan on pagg_tab_ml_p2_s2 pagg_tab_ml_2\n> > (14 rows)\n> > \n> > The default number of parallel workers is taken, because the append is\n> > on an upper relation, not the partitioned table itself.\n> > \n> > One would wish that \"parallel_workers\" somehow percolated up,\n> \n> It appears that we don't set the fields of an upper relation such that\n> IS_PARTITIONED_REL() would return true for it, like we do for base and\n> join relations. In compute_append_parallel_workers(), we're requiring\n> it to be true to even look at the relation's rel_parallel_workers. 
We\n> can set those properties in *some* grouping rels, for example, when\n> the aggregation is grouped on the input relation's partition key.\n> That would make it possible for the Append on such grouping relations\n> to refer to their input partitioned relation's rel_parallel_workers.\n> For example, with the attached PoC patch:\n> \n> SET parallel_setup_cost TO 0;\n> SET max_parallel_workers_per_gather TO 8;\n> SET enable_partitionwise_aggregate = on;\n> \n> alter table pagg_tab_ml set (parallel_workers=5);\n> \n> EXPLAIN (COSTS OFF) SELECT a, count(*) FROM pagg_tab_ml GROUP BY 1;\n> QUERY PLAN\n> ---------------------------------------------------------------------\n> Gather\n> Workers Planned: 5\n> -> Parallel Append\n> -> HashAggregate\n> Group Key: pagg_tab_ml_5.a\n> -> Append\n> -> Seq Scan on pagg_tab_ml_p3_s1 pagg_tab_ml_5\n> -> Seq Scan on pagg_tab_ml_p3_s2 pagg_tab_ml_6\n> -> HashAggregate\n> Group Key: pagg_tab_ml.a\n> -> Seq Scan on pagg_tab_ml_p1 pagg_tab_ml\n> -> HashAggregate\n> Group Key: pagg_tab_ml_2.a\n> -> Append\n> -> Seq Scan on pagg_tab_ml_p2_s1 pagg_tab_ml_2\n> -> Seq Scan on pagg_tab_ml_p2_s2 pagg_tab_ml_3\n> (16 rows)\n> \n> alter table pagg_tab_ml set (parallel_workers=0);\n> \n> EXPLAIN (COSTS OFF) SELECT a, count(*) FROM pagg_tab_ml GROUP BY 1;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------\n> Append\n> -> Finalize GroupAggregate\n> Group Key: pagg_tab_ml.a\n> -> Gather Merge\n> Workers Planned: 1\n> -> Sort\n> Sort Key: pagg_tab_ml.a\n> -> Partial HashAggregate\n> Group Key: pagg_tab_ml.a\n> -> Parallel Seq Scan on pagg_tab_ml_p1 pagg_tab_ml\n> -> Finalize GroupAggregate\n> Group Key: pagg_tab_ml_2.a\n> -> Gather Merge\n> Workers Planned: 2\n> -> Sort\n> Sort Key: pagg_tab_ml_2.a\n> -> Parallel Append\n> -> Partial HashAggregate\n> Group Key: pagg_tab_ml_2.a\n> -> Parallel Seq Scan on\n> pagg_tab_ml_p2_s1 pagg_tab_ml_2\n> -> Partial HashAggregate\n> Group Key: 
pagg_tab_ml_3.a\n> -> Parallel Seq Scan on\n> pagg_tab_ml_p2_s2 pagg_tab_ml_3\n> -> Finalize GroupAggregate\n> Group Key: pagg_tab_ml_5.a\n> -> Gather Merge\n> Workers Planned: 2\n> -> Sort\n> Sort Key: pagg_tab_ml_5.a\n> -> Parallel Append\n> -> Partial HashAggregate\n> Group Key: pagg_tab_ml_5.a\n> -> Parallel Seq Scan on\n> pagg_tab_ml_p3_s1 pagg_tab_ml_5\n> -> Partial HashAggregate\n> Group Key: pagg_tab_ml_6.a\n> -> Parallel Seq Scan on\n> pagg_tab_ml_p3_s2 pagg_tab_ml_6\n> (36 rows)\n> \n> alter table pagg_tab_ml set (parallel_workers=9);\n> \n> EXPLAIN (COSTS OFF) SELECT a, count(*) FROM pagg_tab_ml GROUP BY 1;\n> QUERY PLAN\n> ---------------------------------------------------------------------\n> Gather\n> Workers Planned: 8\n> -> Parallel Append\n> -> HashAggregate\n> Group Key: pagg_tab_ml_5.a\n> -> Append\n> -> Seq Scan on pagg_tab_ml_p3_s1 pagg_tab_ml_5\n> -> Seq Scan on pagg_tab_ml_p3_s2 pagg_tab_ml_6\n> -> HashAggregate\n> Group Key: pagg_tab_ml.a\n> -> Seq Scan on pagg_tab_ml_p1 pagg_tab_ml\n> -> HashAggregate\n> Group Key: pagg_tab_ml_2.a\n> -> Append\n> -> Seq Scan on pagg_tab_ml_p2_s1 pagg_tab_ml_2\n> -> Seq Scan on pagg_tab_ml_p2_s2 pagg_tab_ml_3\n> (16 rows)\n\nThat looks good!\n\nOne could imagine similar behavior for partitionwise joins, but\nit might be difficult to decide which side should determine the\nnumber of parallel workers.\n\nI think that with this addition, this patch would make a useful improvement.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 03 Mar 2021 17:20:19 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "On Wed, 2021-03-03 at 17:58 +0900, Amit Langote wrote:\n> For example, with the attached PoC patch:\n\nI have incorporated your POC patch and added a regression test.\n\nI didn't test it thoroughly though.\n\nYours,\nLaurenz Albe", "msg_date": "Fri, 05 Mar 2021 14:47:34 +0100", 
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "On Fri, Mar 5, 2021 at 10:47 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> On Wed, 2021-03-03 at 17:58 +0900, Amit Langote wrote:\n> > For example, with the attached PoC patch:\n>\n> I have incorporated your POC patch and added a regression test.\n>\n> I didn't test it thoroughly though.\n\nThanks. Although, I wonder if we should rather consider it a\nstandalone patch to fix a partition planning code deficiency.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Mar 2021 22:55:48 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "On Fri, 2021-03-05 at 22:55 +0900, Amit Langote wrote:\n> On Fri, Mar 5, 2021 at 10:47 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > On Wed, 2021-03-03 at 17:58 +0900, Amit Langote wrote:\n> > > For example, with the attached PoC patch:\n> > \n> > I have incorporated your POC patch and added a regression test.\n> > \n> > I didn't test it thoroughly though.\n> \n> Thanks. 
Although, I wonder if we should rather consider it a\n> standalone patch to fix a partition planning code deficiency.\n\nOh - I didn't realize that your patch was independent.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Fri, 05 Mar 2021 15:06:11 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "On Fri, Mar 5, 2021 at 11:06 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> On Fri, 2021-03-05 at 22:55 +0900, Amit Langote wrote:\n> > On Fri, Mar 5, 2021 at 10:47 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > > On Wed, 2021-03-03 at 17:58 +0900, Amit Langote wrote:\n> > > > For example, with the attached PoC patch:\n> > >\n> > > I have incorporated your POC patch and added a regression test.\n> > >\n> > > I didn't test it thoroughly though.\n> >\n> > Thanks. Although, I wonder if we should rather consider it a\n> > standalone patch to fix a partition planning code deficiency.\n>\n> Oh - I didn't realize that your patch was independent.\n\nAttached a new version rebased over c8f78b616, with the grouping\nrelation partitioning enhancements as a separate patch 0001. Sorry\nabout the delay.\n\nI'd also like to change compute_append_parallel_workers(), as also\nmentioned upthread, such that disabling Parallel Append by setting\nparallel_workers=0 on a parent partitioned table does not also disable\nthe partitions themselves being scanned in parallel even though under\nan Append. 
I didn't get time today to work on that though.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 18 Mar 2021 22:06:33 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "On Fri, 19 Mar 2021 at 02:07, Amit Langote <amitlangote09@gmail.com> wrote:\n> Attached a new version rebased over c8f78b616, with the grouping\n> relation partitioning enhancements as a separate patch 0001. Sorry\n> about the delay.\n\nI had a quick look at this and wondered if the partitioned table's\nparallel workers shouldn't be limited to the sum of the parallel\nworkers of the Append's subpaths?\n\nIt seems a bit weird to me that the following case requests 4 workers:\n\n# create table lp (a int) partition by list(a);\n# create table lp1 partition of lp for values in(1);\n# insert into lp select 1 from generate_series(1,10000000) x;\n# alter table lp1 set (parallel_workers = 2);\n# alter table lp set (parallel_workers = 4);\n# set max_parallel_workers_per_Gather = 8;\n# explain select count(*) from lp;\n QUERY PLAN\n-------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=97331.63..97331.64 rows=1 width=8)\n -> Gather (cost=97331.21..97331.62 rows=4 width=8)\n Workers Planned: 4\n -> Partial Aggregate (cost=96331.21..96331.22 rows=1 width=8)\n -> Parallel Seq Scan on lp1 lp (cost=0.00..85914.57\nrows=4166657 width=0)\n(5 rows)\n\nI can see a good argument that there should only be 2 workers here.\n\nIf someone sets the partitioned table's parallel_workers high so that\nthey get a large number of workers when no partitions are pruned\nduring planning, do they really want the same number of workers in\nqueries where a large number of partitions are pruned?\n\nThis problem gets a bit more complex in generic plans where the\nplanner can't prune anything but run-time pruning prunes 
many\npartitions. I'm not so sure what to do about that, but the problem\ndoes exist today to a lesser extent with the current method of\ndetermining the append parallel workers.\n\nDavid\n\n\n", "msg_date": "Wed, 24 Mar 2021 14:14:48 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "On Wed, 2021-03-24 at 14:14 +1300, David Rowley wrote:\n> On Fri, 19 Mar 2021 at 02:07, Amit Langote <amitlangote09@gmail.com> wrote:\n> > Attached a new version rebased over c8f78b616, with the grouping\n> > relation partitioning enhancements as a separate patch 0001. Sorry\n> > about the delay.\n> \n> I had a quick look at this and wondered if the partitioned table's\n> parallel workers shouldn't be limited to the sum of the parallel\n> workers of the Append's subpaths?\n> \n> It seems a bit weird to me that the following case requests 4 workers:\n> \n> # create table lp (a int) partition by list(a);\n> # create table lp1 partition of lp for values in(1);\n> # insert into lp select 1 from generate_series(1,10000000) x;\n> # alter table lp1 set (parallel_workers = 2);\n> # alter table lp set (parallel_workers = 4);\n> # set max_parallel_workers_per_Gather = 8;\n> # explain select count(*) from lp;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------\n> Finalize Aggregate (cost=97331.63..97331.64 rows=1 width=8)\n> -> Gather (cost=97331.21..97331.62 rows=4 width=8)\n> Workers Planned: 4\n> -> Partial Aggregate (cost=96331.21..96331.22 rows=1 width=8)\n> -> Parallel Seq Scan on lp1 lp (cost=0.00..85914.57\n> rows=4166657 width=0)\n> (5 rows)\n> \n> I can see a good argument that there should only be 2 workers here.\n\nGood point, I agree.\n\n> If someone sets the partitioned table's parallel_workers high so that\n> they get a large number of workers when no partitions are pruned\n> during planning, do they 
really want the same number of workers in\n> queries where a large number of partitions are pruned?\n> \n> This problem gets a bit more complex in generic plans where the\n> planner can't prune anything but run-time pruning prunes many\n> partitions. I'm not so sure what to do about that, but the problem\n> does exist today to a lesser extent with the current method of\n> determining the append parallel workers.\n\nAlso a good point. That would require changing the actual number of\nparallel workers at execution time, but that is tricky.\nIf we go with your suggestion above, we'd have to disambiguate if\nthe number of workers is set because a partition is large enough\nto warrant a parallel scan (then it shouldn't be reduced if the executor\nprunes partitions) or if it is because of the number of partitions\n(then it should be reduced).\n\nCurrently, we don't reduce parallelism if the executor prunes\npartitions, so this could be seen as an independent problem.\n\nI don't know if Seamus is still working on that; if not, we might\nmark it as \"returned with feedback\".\n\nPerhaps Amit's patch 0001 should go in independently.\n\nI'll mark the patch as \"waiting for author\".\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Fri, 02 Apr 2021 16:36:27 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "On Fri, Apr 2, 2021 at 11:36 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> On Wed, 2021-03-24 at 14:14 +1300, David Rowley wrote:\n> > On Fri, 19 Mar 2021 at 02:07, Amit Langote <amitlangote09@gmail.com> wrote:\n> > > Attached a new version rebased over c8f78b616, with the grouping\n> > > relation partitioning enhancements as a separate patch 0001. 
Sorry\n> > > about the delay.\n> >\n> > I had a quick look at this and wondered if the partitioned table's\n> > parallel workers shouldn't be limited to the sum of the parallel\n> > workers of the Append's subpaths?\n> >\n> > It seems a bit weird to me that the following case requests 4 workers:\n> >\n> > # create table lp (a int) partition by list(a);\n> > # create table lp1 partition of lp for values in(1);\n> > # insert into lp select 1 from generate_series(1,10000000) x;\n> > # alter table lp1 set (parallel_workers = 2);\n> > # alter table lp set (parallel_workers = 4);\n> > # set max_parallel_workers_per_Gather = 8;\n> > # explain select count(*) from lp;\n> > QUERY PLAN\n> > -------------------------------------------------------------------------------------------\n> > Finalize Aggregate (cost=97331.63..97331.64 rows=1 width=8)\n> > -> Gather (cost=97331.21..97331.62 rows=4 width=8)\n> > Workers Planned: 4\n> > -> Partial Aggregate (cost=96331.21..96331.22 rows=1 width=8)\n> > -> Parallel Seq Scan on lp1 lp (cost=0.00..85914.57\n> > rows=4166657 width=0)\n> > (5 rows)\n> >\n> > I can see a good argument that there should only be 2 workers here.\n>\n> Good point, I agree.\n>\n> > If someone sets the partitioned table's parallel_workers high so that\n> > they get a large number of workers when no partitions are pruned\n> > during planning, do they really want the same number of workers in\n> > queries where a large number of partitions are pruned?\n> >\n> > This problem gets a bit more complex in generic plans where the\n> > planner can't prune anything but run-time pruning prunes many\n> > partitions. I'm not so sure what to do about that, but the problem\n> > does exist today to a lesser extent with the current method of\n> > determining the append parallel workers.\n>\n> Also a good point. 
That would require changing the actual number of\n> parallel workers at execution time, but that is tricky.\n> If we go with your suggestion above, we'd have to disambiguate if\n> the number of workers is set because a partition is large enough\n> to warrant a parallel scan (then it shouldn't be reduced if the executor\n> prunes partitions) or if it is because of the number of partitions\n> (then it should be reduced).\n\nMaybe we really want a parallel_append_workers for partitioned tables,\ninstead of piggybacking on parallel_workers?\n\n> I don't know if Seamus is still working on that; if not, we might\n> mark it as \"returned with feedback\".\n\nI have to agree given the time left.\n\n> Perhaps Amit's patch 0001 should go in independently.\n\nPerhaps, but maybe we should wait until something really needs that.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 6 Apr 2021 16:46:26 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "> On 6 Apr 2021, at 09:46, Amit Langote <amitlangote09@gmail.com> wrote:\n> On Fri, Apr 2, 2021 at 11:36 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n\n>> I don't know if Seamus is still working on that; if not, we might\n>> mark it as \"returned with feedback\".\n> \n> I have to agree given the time left.\n\nThis thread has stalled and the patch no longer applies. 
I propose that we\nmark this Returned with Feedback, is that Ok with you Amit?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 3 Sep 2021 18:24:04 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "On Fri, 2021-09-03 at 18:24 +0200, Daniel Gustafsson wrote:\n> > On 6 Apr 2021, at 09:46, Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Fri, Apr 2, 2021 at 11:36 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> \n> > > I don't know if Seamus is still working on that; if not, we might\n> > > mark it as \"returned with feedback\".\n> > \n> > I have to agree given the time left.\n> \n> This thread has stalled and the patch no longer applies.  I propose that we\n> mark this Returned with Feedback, is that Ok with you Amit?\n\n+1. That requires more thought.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Fri, 03 Sep 2021 20:10:09 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "On Sat, Sep 4, 2021 at 3:10 Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n\n> On Fri, 2021-09-03 at 18:24 +0200, Daniel Gustafsson wrote:\n> > > On 6 Apr 2021, at 09:46, Amit Langote <amitlangote09@gmail.com> wrote:\n> > > On Fri, Apr 2, 2021 at 11:36 PM Laurenz Albe <laurenz.albe@cybertec.at>\n> wrote:\n> >\n> > > > I don't know if Seamus is still working on that; if not, we might\n> > > > mark it as \"returned with feedback\".\n> > >\n> > > I have to agree given the time left.\n> >\n> > This thread has stalled and the patch no longer applies. I propose that\n> we\n> > mark this Returned with Feedback, is that Ok with you Amit?\n>\n> +1. 
That requires more thought.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Fri, 03 Sep 2021 20:10:09 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "On Sat, Sep 4, 2021 at 3:10 Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n\n> On Fri, 2021-09-03 at 18:24 +0200, Daniel Gustafsson wrote:\n> > > On 6 Apr 2021, at 09:46, Amit Langote <amitlangote09@gmail.com> wrote:\n> > > On Fri, Apr 2, 2021 at 11:36 PM Laurenz Albe <laurenz.albe@cybertec.at>\n> wrote:\n> >\n> > > > I don't know if Seamus is still working on that; if not, we might\n> > > > mark it as \"returned with feedback\".\n> > >\n> > > I have to agree given the time left.\n> >\n> > This thread has stalled and the patch no longer applies. I propose that\n> we\n> > mark this Returned with Feedback, is that Ok with you Amit?\n>\n> +1. 
That requires more thought.\n\n\nYes, I think so too.\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Sat, 4 Sep 2021 08:17:33 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" }, { "msg_contents": "> On 4 Sep 2021, at 01:17, Amit Langote <amitlangote09@gmail.com> wrote:\n> On Sat, Sep 4, 2021 at 3:10 Laurenz Albe <laurenz.albe@cybertec.at <mailto:laurenz.albe@cybertec.at>> wrote:\n> On Fri, 2021-09-03 at 18:24 +0200, Daniel Gustafsson wrote:\n\n> > This thread has stalled and the patch no longer applies. I propose that we\n> > mark this Returned with Feedback, is that Ok with you Amit?\n> \n> +1. That requires more thought.\n> \n> Yes, I think so too.\n\nDone that way, it can always be resubmitted in a later CF.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Sat, 4 Sep 2021 08:14:49 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: A reloption for partitioned tables - parallel_workers" } ]
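The capping rule David Rowley suggests above — don't let a partitioned table's parallel_workers reloption request more workers than the surviving Append subpaths can use — boils down to a small computation. A rough sketch follows (Python, with invented names; this is an illustration of the idea, not code from the patch):

```python
def append_parallel_workers(parent_workers, subpath_workers,
                            max_parallel_workers_per_gather=8):
    """Cap the parent's parallel_workers reloption by the total number of
    workers planned for the unpruned Append subpaths, then by the GUC."""
    limit = sum(subpath_workers)
    if parent_workers is not None:      # reloption was set on the parent
        limit = min(parent_workers, limit)
    return min(limit, max_parallel_workers_per_gather)

# The example from the thread: lp has parallel_workers=4, but after pruning
# only lp1 (parallel_workers=2) remains, so only 2 workers are requested.
print(append_parallel_workers(4, [2]))  # -> 2
```

With no reloption set on the parent, the sum of the subpath workers (still capped by the GUC) would be used, which is close to today's behavior.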
[ { "msg_contents": "Hi,\n\nProblem description:\nWhile working on a homegrown limited solution to replace (a very limited set of) golden gate capabilities we have created a CDC solution using the WAL capabilities.\n\nThe data flows like this:\nPG1 --> Debezium(wal2json) --> Kafka1 --> MM2 --> Kafka2 --> Kafka Connect Sink Plugin --> PG2\nAnd we wanted also changes to flow the other direction as well:\nPG1 <-- Kafka Connect Sink Plugin <-- Kafka1 <-- MM2 <-- Kafka2 <-- Debezium(wal2json) <-- PG2\n\nWhere our homegrown \"Kafka Connect Sink Plugin\" will do manipulations on replicated data.\n\nHow do we prevent cyclic replication in this case?\n\nLooking around I came across this nice explanation:\n\nhttps://www.highgo.ca/2020/04/18/the-origin-in-postgresql-logical-decoding/\n\nUsing the origin to filter records in the wal2json works perfect once we set up an origin.\n\nBut, calling pg_replication_origin_session_setup requires superuser privileges. Our intent is to make this call when starting a write session in the \"Kafka Connect Sink Plugin\" that writes data to PG.\n\nThe logical replication is usually done on the replication channel rather than the normal user space session so I see the reason for requiring superuser. This is aligned with the documentation, so this is not a bug per se.\n\nIn my mind the requirement for superuser is too strong. I think that requiring privileges of a replication user is more suitable. 
This way we can require that only a user with replication privileges will actually do replication, even if this is not really a replication.\n\nTaking it one step further, I see no reason why stamping a session with origin requires elevated privileges at all, but don't know enough about this.\n\nZohar Gofer\n\nThis email and the information contained herein is proprietary and confidential and subject to the Amdocs Email Terms of Service, which you may review at https://www.amdocs.com/about/email-terms-of-service <https://www.amdocs.com/about/email-terms-of-service>", "msg_date": "Mon, 15 Feb 2021 09:37:53 +0000", "msg_from": "Zohar Gofer <Zohar.Gofer@amdocs.com>", "msg_from_op": true, "msg_subject": "pg_replication_origin_session_setup and superuser" }, { "msg_contents": "On Mon, Feb 15, 2021 at 09:37:53AM +0000, Zohar Gofer wrote:\n> In my mind the requirement for superuser is too strong. I think that\n> requiring privileges of a replication user is more suitable. This\n> way we can require that only a user with replication privileges will\n> actually do replication, even if this is not really a replication.\n\nPostgreSQL 14 will remove those hardcoded superuser checks. Please\nsee this thread:\nhttps://www.postgresql.org/message-id/CAPdiE1xJMZOKQL3dgHMUrPqysZkgwzSMXETfKkHYnBAB7-0VRQ@mail.gmail.com\nAnd its related commit:\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=cc072641d41c55c6aa24a331fc1f8029e0a8d799\n\nWhile the default is still superuser-only, it becomes possible to\ngrant access to this stuff to other roles that have no need to be\nsuperusers.\n--\nMichael", "msg_date": "Tue, 16 Feb 2021 09:51:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_replication_origin_session_setup and superuser" }, { "msg_contents": "Thanks. This seems to be the fix we need.\nWould it be possible to push it to previous versions? 
12 or 13?\n\nZohar\n\n-----Original Message-----\nFrom: Michael Paquier <michael@paquier.xyz> \nSent: Tuesday, February 16, 2021 2:52 AM\nTo: Zohar Gofer <Zohar.Gofer@amdocs.com>\nCc: pgsql-hackers@lists.postgresql.org\nSubject: Re: pg_replication_origin_session_setup and superuser\n\nOn Mon, Feb 15, 2021 at 09:37:53AM +0000, Zohar Gofer wrote:\n> In my mind the requirement for superuser is too strong. I think that \n> requiring privileges of a replication user is more suitable. This way \n> we can require that only a user with replication privileges will \n> actually do replication, even if this is not really a replication.\n\nPostgreSQL 14 will remove those hardcoded superuser checks. Please see this thread:\nhttps://www.postgresql.org/message-id/CAPdiE1xJMZOKQL3dgHMUrPqysZkgwzSMXETfKkHYnBAB7-0VRQ@mail.gmail.com\nAnd its related commit:\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=cc072641d41c55c6aa24a331fc1f8029e0a8d799\n\nWhile the default is still superuser-only, it becomes possible to grant access to this stuff to other roles that have no need to be superusers.\n--\nMichael\nThis email and the information contained herein is proprietary and confidential and subject to the Amdocs Email Terms of Service, which you may review at https://www.amdocs.com/about/email-terms-of-service <https://www.amdocs.com/about/email-terms-of-service>\n\n\n\n", "msg_date": "Tue, 16 Feb 2021 07:54:32 +0000", "msg_from": "Zohar Gofer <Zohar.Gofer@amdocs.com>", "msg_from_op": true, "msg_subject": "RE: pg_replication_origin_session_setup and superuser" }, { "msg_contents": "On Tue, Feb 16, 2021 at 07:54:32AM +0000, Zohar Gofer wrote:\n> Thanks. This seems to be the fix we need.\n> Would it be possible to push it to previous versions? 12 or 13?\n\nNew features don't go into stable branches, only bug fixes do. 
And\nthis is not a bug fix, but a feature.\n--\nMichael", "msg_date": "Tue, 16 Feb 2021 17:14:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_replication_origin_session_setup and superuser" } ]
[ { "msg_contents": "A customer asked about including Server Name Indication (SNI) into the \nSSL connection from the client, so they can use an SSL-aware proxy to \nroute connections. There was a thread a few years ago where this was \nbriefly discussed but no patch appeared.[0] I whipped up a quick patch \nand it did seem to do the job, so I figured I'd share it here.\n\nThe question I had was whether this should be an optional behavior, or \nconversely a behavior that can be turned off, or whether it should just \nbe turned on all the time.\n\nTechnically, it seems pretty harmless. It adds another field to the TLS \nhandshake, and if the server is not interested in it, it just gets ignored.\n\nThe Wikipedia page[1] discusses some privacy concerns in the context of \nweb browsing, but it seems there is no principled solution to those. \nThe relevant RFC[2] \"recommends\" that SNI is used for all applicable TLS \nconnections.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/CAPPwrB_tsOw8MtVaA_DFyOFRY2ohNdvMnLoA_JRr3yB67Rggmg%40mail.gmail.com\n[1]: https://en.wikipedia.org/wiki/Server_Name_Indication\n[2]: https://tools.ietf.org/html/rfc6066#section-3", "msg_date": "Mon, 15 Feb 2021 15:09:47 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "SSL SNI" }, { "msg_contents": "On Mon, 15 Feb 2021 at 15:09, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> A customer asked about including Server Name Indication (SNI) into the\n> SSL connection from the client, so they can use an SSL-aware proxy to\n> route connections. There was a thread a few years ago where this was\n> briefly discussed but no patch appeared.[0] I whipped up a quick patch\n> and it did seem to do the job, so I figured I'd share it here.\n\nThe same topic of SSL-aware proxying based on SNI was mentioned in a\nmore recent thread here [0]. The state of that patch is unclear,\nthough. 
Other than that, this feature seems useful.\n\n\n+ /*\n+ * Set Server Name Indication (SNI), but not if it's a literal IP address.\n+ * (RFC 6066)\n+ */\n+ if (!((conn->pghost[0] >= '0' && conn->pghost[0] <= '9') ||\nstrchr(conn->pghost, ':')))\n\n'1one.example.com' is a valid hostname, but would fail this trivial\ntest, and thus would not have SNI enabled on its connection.\n\n\nWith regards,\n\nMatthias van de Meent\n\n\n[0] https://www.postgresql.org/message-id/flat/37846a5e-bb5e-0c4f-3ee8-54fb4bd02fab%40gmx.de\n\n\n", "msg_date": "Mon, 15 Feb 2021 15:28:23 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SSL SNI" }, { "msg_contents": "Hi Peter,\nI imagine this also (finally) opens up the possibility for the server\nto present a different certificate for each hostname based on SNI.\nThis eliminates the requirement for wildcard certs where the cluster\nis running on a host with multiple (typically two to three) hostnames\nand the clients check the hostname against SAN in the cert\n(sslmode=verify-full). Am I right? Is that feature on anybody's\nroadmap?\n\nCheers,\nJesse\n\n\n\nOn Mon, Feb 15, 2021 at 6:09 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> A customer asked about including Server Name Indication (SNI) into the\n> SSL connection from the client, so they can use an SSL-aware proxy to\n> route connections. There was a thread a few years ago where this was\n> briefly discussed but no patch appeared.[0] I whipped up a quick patch\n> and it did seem to do the job, so I figured I'd share it here.\n>\n> The question I had was whether this should be an optional behavior, or\n> conversely a behavior that can be turned off, or whether it should just\n> be turned on all the time.\n>\n> Technically, it seems pretty harmless. 
It adds another field to the TLS\n> handshake, and if the server is not interested in it, it just gets ignored.\n>\n> The Wikipedia page[1] discusses some privacy concerns in the context of\n> web browsing, but it seems there is no principled solution to those.\n> The relevant RFC[2] \"recommends\" that SNI is used for all applicable TLS\n> connections.\n>\n>\n> [0]:\n> https://www.postgresql.org/message-id/flat/CAPPwrB_tsOw8MtVaA_DFyOFRY2ohNdvMnLoA_JRr3yB67Rggmg%40mail.gmail.com\n> [1]: https://en.wikipedia.org/wiki/Server_Name_Indication\n> [2]: https://tools.ietf.org/html/rfc6066#section-3\n\n\n", "msg_date": "Mon, 15 Feb 2021 09:40:10 -0800", "msg_from": "Jesse Zhang <sbjesse@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SSL SNI" }, { "msg_contents": "On 2021-02-15 18:40, Jesse Zhang wrote:\n> I imagine this also (finally) opens up the possibility for the server\n> to present a different certificate for each hostname based on SNI.\n> This eliminates the requirement for wildcard certs where the cluster\n> is running on a host with multiple (typically two to three) hostnames\n> and the clients check the hostname against SAN in the cert\n> (sslmode=verify-full). Am I right? Is that feature on anybody's\n> roadmap?\n\nThis would be the client side of that. But I don't know of anyone \nplanning to work on the server side.\n\n\n", "msg_date": "Mon, 15 Feb 2021 20:24:56 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: SSL SNI" }, { "msg_contents": "On Mon, 2021-02-15 at 15:09 +0100, Peter Eisentraut wrote:\r\n> The question I had was whether this should be an optional behavior, or \r\n> conversely a behavior that can be turned off, or whether it should just \r\n> be turned on all the time.\r\n\r\nPersonally I think there should be a toggle, so that any users for whom\r\nhostnames are potentially sensitive don't have to make that information\r\navailable on the wire. 
Opt-in, to avoid having any new information\r\ndisclosure after a version upgrade?\r\n\r\n> The Wikipedia page[1] discusses some privacy concerns in the context of \r\n> web browsing, but it seems there is no principled solution to those.\r\n\r\nI think Encrypted Client Hello is the new-and-improved Encrypted SNI,\r\nand it's on the very bleeding edge. You'd need to load a public key\r\ninto the client using some out-of-band communication -- e.g. browsers\r\nwould use DNS-over-TLS, but it might not make sense for a Postgres\r\nclient to use that same system.\r\n\r\nNSS will probably be receiving any final implementation before OpenSSL,\r\nif I had to guess, since Mozilla is driving pieces of the\r\nimplementation.\r\n\r\n--Jacob\r\n", "msg_date": "Tue, 16 Feb 2021 23:01:36 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: SSL SNI" }, { "msg_contents": "On 15.02.21 15:28, Matthias van de Meent wrote:\n> + /*\n> + * Set Server Name Indication (SNI), but not if it's a literal IP address.\n> + * (RFC 6066)\n> + */\n> + if (!((conn->pghost[0] >= '0' && conn->pghost[0] <= '9') ||\n> strchr(conn->pghost, ':')))\n> \n> '1one.example.com' is a valid hostname, but would fail this trivial\n> test, and thus would not have SNI enabled on its connection.\n\nHere is an updated patch that fixes this. 
If there are other ideas for \nhow to tell apart literal IP addresses from host names that are less ad \nhoc, I would welcome them.", "msg_date": "Thu, 25 Feb 2021 16:58:28 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: SSL SNI" }, { "msg_contents": "On 17.02.21 00:01, Jacob Champion wrote:\n> On Mon, 2021-02-15 at 15:09 +0100, Peter Eisentraut wrote:\n>> The question I had was whether this should be an optional behavior, or\n>> conversely a behavior that can be turned off, or whether it should just\n>> be turned on all the time.\n> Personally I think there should be a toggle, so that any users for whom\n> hostnames are potentially sensitive don't have to make that information\n> available on the wire. Opt-in, to avoid having any new information\n> disclosure after a version upgrade?\n\nJust as additional data points, it has come to my attention that both \nthe Go driver (\"lib/pq\") and the JDBC environment already send SNI \nautomatically. (In the case of JDBC this is done by the Java system \nlibraries, not the JDBC driver implementation.)\n\n\n", "msg_date": "Thu, 25 Feb 2021 17:00:25 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: SSL SNI" }, { "msg_contents": "On Thu, 2021-02-25 at 17:00 +0100, Peter Eisentraut wrote:\r\n> Just as additional data points, it has come to my attention that both \r\n> the Go driver (\"lib/pq\") and the JDBC environment already send SNI \r\n> automatically. (In the case of JDBC this is done by the Java system \r\n> libraries, not the JDBC driver implementation.)\r\n\r\nFor the Go case it's only for sslmode=verify-full, and only because the\r\nGo standard library implementation does it for you automatically if you\r\nrequest the builtin server hostname validation. (I checked both lib/pq\r\nand its de facto replacement, jackc/pgx.) 
So it may not be something\r\nthat was done on purpose by the driver implementation.\r\n\r\n--Jacob\r\n", "msg_date": "Thu, 25 Feb 2021 18:36:22 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: SSL SNI" }, { "msg_contents": "Hate to be that guy but....\n\nThis still doesn't seem like it is IPv6-ready. Is there any harm in\nhaving SNI with an IPv6 address there if it gets through?\n\n\n", "msg_date": "Thu, 25 Feb 2021 21:40:03 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: SSL SNI" }, { "msg_contents": "On 26.02.21 03:40, Greg Stark wrote:\n> This still doesn't seem like it is IPv6-ready.\n\nDo you mean the IPv6 detection code is not correct? What is the problem?\n\n> Is there any harm in\n> having SNI with an IPv6 address there if it\n> gets through?\n\nI doubt it.\n\n\n", "msg_date": "Fri, 26 Feb 2021 08:05:12 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: SSL SNI" }, { "msg_contents": "Greetings,\n\n* Peter Eisentraut (peter.eisentraut@enterprisedb.com) wrote:\n> A customer asked about including Server Name Indication (SNI) into the SSL\n> connection from the client, so they can use an SSL-aware proxy to route\n> connections. There was a thread a few years ago where this was briefly\n> discussed but no patch appeared.[0] I whipped up a quick patch and it did\n> seem to do the job, so I figured I'd share it here.\n\nThis doesn't actually result in the ability to do that SSL connection\nproxying though, does it? 
At the least, whatever the proxy is would\nhave to be taught how to send back an 'S' to the client, and send an 'S'\nto the server chosen after the client sends along the TLS ClientHello w/\nSNI set, and then pass the traffic between afterwards.\n\nPerhaps it's worth doing this to allow proxy developers to do that, but\nthis isn't enough to make it actually work without the proxy actually\nknowing PG and being able to be configured to do the right thing for the\nPG protocol. I would think that, ideally, we'd have some proxy author\nwho would be willing to actually implement this and test that it all\nworks with this patch applied, and then make sure to explain that\nproxies will need to be adapted to be able to work. Simply including\nthis and then putting in the release notes that we now provide SNI as\npart of the SSL connection would likely lead people to believe that\nit'll 'just work'. Perhaps to manage expectations we'd want to say\nsomething like:\n\n- libpq will now include Server Name Indication as part of the\n PostgreSQL SSL startup process; proxies will need to understand the\n PostgreSQL protocol in order to be able to leverage this to perform\n routing.\n\nOr something along those lines, I would think. Of course, such a proxy\nwould need to also understand how to tell a client that, for example,\nGSSAPI encryption isn't available if a 'G' came first from the client,\nand what to do if a plaintext connection was requested.\n\n> The question I had was whether this should be an optional behavior, or\n> conversely a behavior that can be turned off, or whether it should just be\n> turned on all the time.\n\nCertainly seems like something that we should support turning off, at\nleast. 
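The startup dance described here is small but protocol-specific: before any TLS bytes flow, the client sends an 8-byte SSLRequest (length 8, request code 80877103) and expects a single 'S' or 'N' byte back, so a routing proxy has to recognize that message itself before it can read the ClientHello for the SNI. A minimal recognizer, as a sketch (Python, not part of any patch in this thread):

```python
import struct

SSLREQUEST_CODE = 80877103  # (1234 << 16) | 5679, from the wire protocol

def is_sslrequest(packet: bytes) -> bool:
    """Return True if packet starts with a PostgreSQL SSLRequest message."""
    if len(packet) < 8:
        return False
    length, code = struct.unpack("!ii", packet[:8])
    return length == 8 and code == SSLREQUEST_CODE

print(is_sslrequest(struct.pack("!ii", 8, SSLREQUEST_CODE)))  # -> True
print(is_sslrequest(b"GET / HT"))                             # -> False
```

A proxy seeing this message would answer b"S" itself, then hand the socket to its TLS layer to peek at the SNI and pick a backend.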
As mentioned elsewhere, knowing the system that's being\nconnected to is certainly interesting to attackers.\n\nThanks,\n\nStephen", "msg_date": "Fri, 26 Feb 2021 12:55:16 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: SSL SNI" }, { "msg_contents": "> Do you mean the IPv6 detection code is not correct? What is the problem?\n\nThis bit, will recognize ipv4 addresses but not ipv6 addresses:\n\n+ /*\n+ * Set Server Name Indication (SNI), but not if it's a literal IP address.\n+ * (RFC 6066)\n+ */\n+ if (!(strspn(conn->pghost, \"0123456789.\") == strlen(conn->pghost) ||\n+ strchr(conn->pghost, ':')))\n+ {\n\n\n", "msg_date": "Fri, 26 Feb 2021 17:27:42 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: SSL SNI" }, { "msg_contents": "On 26.02.21 23:27, Greg Stark wrote:\n>> Do you mean the IPv6 detection code is not correct? What is the problem?\n> \n> This bit, will recognize ipv4 addresses but not ipv6 addresses:\n> \n> + /*\n> + * Set Server Name Indication (SNI), but not if it's a literal IP address.\n> + * (RFC 6066)\n> + */\n> + if (!(strspn(conn->pghost, \"0123456789.\") == strlen(conn->pghost) ||\n> + strchr(conn->pghost, ':')))\n> + {\n\nThe colon should recognize an IPv6 address, unless I'm not thinking \nstraight.\n\n\n", "msg_date": "Thu, 18 Mar 2021 09:31:24 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: SSL SNI" }, { "msg_contents": "On Thu, Mar 18, 2021 at 9:31 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 26.02.21 23:27, Greg Stark wrote:\n> >> Do you mean the IPv6 detection code is not correct? 
What is the problem?\n> >\n> > This bit, will recognize ipv4 addresses but not ipv6 addresses:\n> >\n> > + /*\n> > + * Set Server Name Indication (SNI), but not if it's a literal IP address.\n> > + * (RFC 6066)\n> > + */\n> > + if (!(strspn(conn->pghost, \"0123456789.\") == strlen(conn->pghost) ||\n> > + strchr(conn->pghost, ':')))\n> > + {\n>\n> The colon should recognize an IPv6 address, unless I'm not thinking\n> straight.\n\nYeah, it should.\n\nOne could argue you should also check that it's got only valid ipv6\ncharacters in it, but since the colon isn't allowed in a hostname this\nshouldn't be a problem. (And we cannot have a <host>:<port> stored in\nconn->pghost).\n\nMy guess is Greg missed the second part of it that checks for a colon\n-- so maybe expand on that a bit in the comment, and on the fact that\nwe already know the port can't be part of it.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Thu, 18 Mar 2021 09:48:57 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: SSL SNI" }, { "msg_contents": "On 25.02.21 19:36, Jacob Champion wrote:\n> On Thu, 2021-02-25 at 17:00 +0100, Peter Eisentraut wrote:\n>> Just as additional data points, it has come to my attention that both\n>> the Go driver (\"lib/pq\") and the JDBC environment already send SNI\n>> automatically. (In the case of JDBC this is done by the Java system\n>> libraries, not the JDBC driver implementation.)\n> \n> For the Go case it's only for sslmode=verify-full, and only because the\n> Go standard library implementation does it for you automatically if you\n> request the builtin server hostname validation. (I checked both lib/pq\n> and its de facto replacement, jackc/pgx.) 
So it may not be something\n> that was done on purpose by the driver implementation.\n\nHere is a new patch with an option to turn it off, and some \ndocumentation added.", "msg_date": "Thu, 18 Mar 2021 12:27:02 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: SSL SNI" }, { "msg_contents": "On 18.03.21 12:27, Peter Eisentraut wrote:\n> On 25.02.21 19:36, Jacob Champion wrote:\n>> On Thu, 2021-02-25 at 17:00 +0100, Peter Eisentraut wrote:\n>>> Just as additional data points, it has come to my attention that both\n>>> the Go driver (\"lib/pq\") and the JDBC environment already send SNI\n>>> automatically.  (In the case of JDBC this is done by the Java system\n>>> libraries, not the JDBC driver implementation.)\n>>\n>> For the Go case it's only for sslmode=verify-full, and only because the\n>> Go standard library implementation does it for you automatically if you\n>> request the builtin server hostname validation. (I checked both lib/pq\n>> and its de facto replacement, jackc/pgx.) So it may not be something\n>> that was done on purpose by the driver implementation.\n> \n> Here is a new patch with an option to turn it off, and some \n> documentation added.\n\nCommitted like that. (Default to on, but it's easy to change if there \nare any further thoughts.)\n\n\n\n", "msg_date": "Wed, 7 Apr 2021 15:32:55 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: SSL SNI" }, { "msg_contents": "On Wed, 2021-04-07 at 15:32 +0200, Peter Eisentraut wrote:\r\n> Committed like that. (Default to on, but it's easy to change if there \r\n> are any further thoughts.)\r\n\r\nHi Peter,\r\n\r\nIt looks like this code needs some guards for a NULL conn->pghost. 
For example when running\r\n\r\n psql 'dbname=postgres sslmode=require hostaddr=127.0.0.1'\r\nwith no PGHOST in the environment, psql is currently segfaulting for\r\nme.\r\n\r\n--Jacob\r\n", "msg_date": "Thu, 3 Jun 2021 17:25:24 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: SSL SNI" }, { "msg_contents": "Jacob Champion <pchampion@vmware.com> writes:\n> It looks like this code needs some guards for a NULL conn->pghost. For example when running\n> psql 'dbname=postgres sslmode=require hostaddr=127.0.0.1'\n> with no PGHOST in the environment, psql is currently segfaulting for\n> me.\n\nDuplicated here:\n\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 0x00007f3adec47ec3 in __strspn_sse42 () from /lib64/libc.so.6\n(gdb) bt\n#0 0x00007f3adec47ec3 in __strspn_sse42 () from /lib64/libc.so.6\n#1 0x00007f3adf6b7026 in initialize_SSL (conn=0xed4160)\n at fe-secure-openssl.c:1090\n#2 0x00007f3adf6b8755 in pgtls_open_client (conn=conn@entry=0xed4160)\n at fe-secure-openssl.c:132\n#3 0x00007f3adf6b3955 in pqsecure_open_client (conn=conn@entry=0xed4160)\n at fe-secure.c:180\n#4 0x00007f3adf6a4808 in PQconnectPoll (conn=conn@entry=0xed4160)\n at fe-connect.c:3102\n#5 0x00007f3adf6a5b31 in connectDBComplete (conn=conn@entry=0xed4160)\n at fe-connect.c:2219\n#6 0x00007f3adf6a8968 in PQconnectdbParams (keywords=keywords@entry=0xed40c0, \n values=values@entry=0xed4110, expand_dbname=expand_dbname@entry=1)\n at fe-connect.c:669\n#7 0x0000000000404db2 in main (argc=<optimized out>, argv=0x7ffc58477208)\n at startup.c:266\n\nYou don't seem to need the \"sslmode=require\" either, just an\nSSL-enabled server.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 03 Jun 2021 13:41:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SSL SNI" }, { "msg_contents": "I wrote:\n> Jacob Champion <pchampion@vmware.com> writes:\n>> It looks like this code needs some guards for a 
NULL conn->pghost. For example when running\n>> psql 'dbname=postgres sslmode=require hostaddr=127.0.0.1'\n>> with no PGHOST in the environment, psql is currently segfaulting for\n>> me.\n\n> Duplicated here:\n\nIt looks like the immediate problem can be resolved by just adding\na check for conn->pghost not being NULL, since the comment above\nsays\n\n * Per RFC 6066, do not set it if the host is a literal IP address (IPv4\n * or IPv6).\n\nand having only hostaddr certainly fits that case. But I didn't\ncheck to see if any more problems arise later.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 03 Jun 2021 13:52:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SSL SNI" }, { "msg_contents": "I wrote:\n> It looks like the immediate problem can be resolved by just adding\n> a check for conn->pghost not being NULL,\n\n... scratch that. There's another problem here, which is that this\ncode should not be looking at conn->pghost AT ALL. That will do the\nwrong thing with a multi-element host list. The right thing to be\nlooking at is conn->connhost[conn->whichhost].host --- with a test\nto make sure it's not NULL or an empty string. (I didn't stop to\nstudy this code close enough to see if it'll ignore an empty\nstring without help.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 03 Jun 2021 14:14:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SSL SNI" }, { "msg_contents": "On 03.06.21 20:14, Tom Lane wrote:\n> I wrote:\n>> It looks like the immediate problem can be resolved by just adding\n>> a check for conn->pghost not being NULL,\n> \n> ... scratch that. There's another problem here, which is that this\n> code should not be looking at conn->pghost AT ALL. That will do the\n> wrong thing with a multi-element host list. The right thing to be\n> looking at is conn->connhost[conn->whichhost].host --- with a test\n> to make sure it's not NULL or an empty string. 
(I didn't stop to\n> study this code close enough to see if it'll ignore an empty\n> string without help.)\n\nPatch attached. Empty host string was handled implicitly by the IP \ndetection expression, but I added an explicit check for sanity. (I \nwasn't actually able to get an empty string to this point, but it's \nclearly better to be prepared for it.)", "msg_date": "Mon, 7 Jun 2021 11:54:31 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: SSL SNI" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Patch attached. Empty host string was handled implicitly by the IP \n> detection expression, but I added an explicit check for sanity. (I \n> wasn't actually able to get an empty string to this point, but it's \n> clearly better to be prepared for it.)\n\nYeah, I'd include the empty-string test just because it's standard\npractice in this area of libpq. Whether those tests are actually\ntriggerable in every case is obscure, but ...\n\nPatch looks sane by eyeball, though I didn't test it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Jun 2021 11:34:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SSL SNI" }, { "msg_contents": "On Mon, Jun 07, 2021 at 11:34:24AM -0400, Tom Lane wrote:\n> Yeah, I'd include the empty-string test just because it's standard\n> practice in this area of libpq. Whether those tests are actually\n> triggerable in every case is obscure, but ...\n\nChecking after a NULL string and an empty one is more libpq-ish.\n\n> Patch looks sane by eyeball, though I didn't test it.\n\nI did, and I could not break it.\n\n+ SSLerrfree(err);\n+ SSL_CTX_free(SSL_context);\n+ return -1;\nIt seems to me that there is no need to free SSL_context if\nSSL_set_tlsext_host_name() fails here, except if you'd like to move\nthe check for the SNI above SSL_CTX_free() around L1082. 
There is no\nharm as SSL_CTX_free() is a no-op on NULL input.\n--\nMichael", "msg_date": "Tue, 8 Jun 2021 15:54:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: SSL SNI" }, { "msg_contents": "On 08.06.21 08:54, Michael Paquier wrote:\n> On Mon, Jun 07, 2021 at 11:34:24AM -0400, Tom Lane wrote:\n>> Yeah, I'd include the empty-string test just because it's standard\n>> practice in this area of libpq. Whether those tests are actually\n>> triggerable in every case is obscure, but ...\n> \n> Checking after a NULL string and an empty one is more libpq-ish.\n> \n>> Patch looks sane by eyeball, though I didn't test it.\n> \n> I did, and I could not break it.\n> \n> + SSLerrfree(err);\n> + SSL_CTX_free(SSL_context);\n> + return -1;\n> It seems to me that there is no need to free SSL_context if\n> SSL_set_tlsext_host_name() fails here, except if you'd like to move\n> the check for the SNI above SSL_CTX_free() around L1082. There is no\n> harm as SSL_CTX_free() is a no-op on NULL input.\n\nGood point. Committed that way.\n\n\n", "msg_date": "Tue, 8 Jun 2021 16:12:44 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: SSL SNI" } ]
[ { "msg_contents": "Not sure that previous email was sent correctly. If it was duplicated,\nsorry for the inconvenience.\n\nHi, hackers,\n\nI have one question related to returned information in the row description\nfor prepared statement.\n\nFor example Select $1 * 2 and then Bind 1.6 to it.\nThe returned result is correct and equal to 3.2, but type modifier in the\nrow description is equal to -1, which is not correct.\n\nDoes someone know where this modifier is calculated? Is this a bug or\nintentional behavior?\n\nBest regards,\nAleksei Ivanov", "msg_date": "Mon, 15 Feb 2021 17:25:55 -0800", "msg_from": "Aleksei Ivanov <iv.alekseii@gmail.com>", "msg_from_op": true, "msg_subject": "Fwd: Row description Metadata information" }, { "msg_contents": "On Mon, Feb 15, 2021 at 05:25:55PM -0800, Aleksei Ivanov wrote:\n> Not sure that previous email was sent correctly. If it was duplicated, sorry\n> for the inconvenience.\n> \n> Hi, hackers,\n> \n> I have one question related to returned information in the row description for\n> prepared statement.\n> \n> For example Select $1 * 2 and then Bind 1.6 to it.\n> The returned result is correct and equal to 3.2, but type modifier in the row\n> description is equal to -1, which is not correct.\n> \n> Does someone know where this modifier is calculated? Is this a bug or intentional\n> behavior?\n\nPostgres can't always propagate the type modifier for all expressions, so\nit basically doesn't even try. 
For example, the modifier for || would\nbe very complex.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 9 Mar 2021 13:32:27 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Fwd: Row description Metadata information" } ]
[ { "msg_contents": "Hi all\n\nThis morning for the umpteenth time I saw:\n\n some error message: [blank here]\n\noutput from a libpq program.\n\nThat's because passing a NULL PGresult to PQgetResultErrorMessage() returns\n\"\". But a NULL PGresult is a normal result from PQexec when it fails to\nsubmit a query due to an invalid connection, when PQgetResult can't get a\nresult from an invalid connection, etc.\n\nE.g. this pattern:\n\n res = PQexec(conn, \"something\");\n\n if (PQresultStatus(res) != PGRES_TUPLES_OK)\n {\n report_an_error(\"some error message: %s\",\n PQresultErrorMessage(res));\n }\n\n... where \"res\" is NULL because the connection was invalid, so\nPQresultErrorMessage(res) emits the empty string.\n\nAs a result, using PQerrorMessage(conn) is actually better in most cases,\ndespite the static error buffer issues. It'll do the right thing when the\nconnection itself is bad. Alternately you land up with the pattern\n\n res == NULL ? PQerrorMessage(conn) : PQresultErrorMessage(res)\n\nI'm not quite sure what to do about this. Ideally PQresultErrorMessage()\nwould take the PGconn* too, but it doesn't, and it's not too friendly to\nadd an extra argument. Plus arguably they mean different things.\n\nMaybe it's as simple as changing the docs to say that you should prefer\nPQerrorMessage() if you aren't using multiple PGresults at a time, and\nmentioning the need to copy the error string.\n\nBut I'd kind of like to instead return a new non-null PGresult\nPGRES_RESULT_ERROR whenever we'd currently return a NULL PGresult due to a\nfailure. Embed the error message into the PGresult, so\nPQresultErrorMessage() can fetch it. Because a PGresult needs to be owned\nby a PGconn and a NULL PGconn can't own anything, we'd instead return a\npointer to a static const global PGresult with value PGRES_NO_CONNECTION if\nany function is passed a NULL PGconn*. That way it doesn't matter if it\ngets PQclear()ed or not. 
And PQclear() could test for (res ==\nPGresult_no_connection) and not try to free it if found.\n\nThe main issue I see there is that existing code may expect a NULL PGresult\nand may test for (res == NULL) as an indicator of a query-sending failure\nfrom PQexec etc. So I suspect we'd need a libpq-global option to enable\nthis behaviour for apps that are aware of it - we wouldn't want to add new\nfunction signature variants after all.\n\nSimilar changes would make sense for returning NULL when there are no\nresult sets remaining after a PQsendQuery, and for returning NULL after\nrow-by-row fetch mode runs out of rows.", "msg_date": "Tue, 16 Feb 2021 14:29:30 +0800", "msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>", "msg_from_op": true, "msg_subject": "libpq PQresultErrorMessage vs PQerrorMessage API issue" }, { "msg_contents": "Craig Ringer <craig.ringer@enterprisedb.com> writes:\n> This morning for the the umpteenth time I saw:\n> some error message: [blank here]\n> output from a libpq program.\n\n> That's because passing a NULL PGresult to PQgetResultErrorMessage() returns\n> \"\". 
But a NULL PGresult is a normal result from PQexec when it fails to\n> submit a query due to an invalid connection, when PQgetResult can't get a\n> result from an invalid connection, etc.\n\nHow much of this is due to programmers not bothering to check whether\nPQconnectXXX succeeded? I do not think we need to go far out of our\nway to cope with that scenario.\n\nThe idea of having a static PGresult that we can hand back to denote\nout-of-memory scenarios is kind of cute. But again, I wonder how\noften the situation comes up in the real world. It might be worth\ndoing just to have a more consistent API spec, though. Particularly\nfor PQgetResult, where a NULL result has a defined, non-error meaning.\n\nIn general I doubt there's enough of a problem here to justify\ninventing new or different APIs. But if we can sand down some\nrough edges without doing that, let's have a look.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 16 Feb 2021 10:15:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq PQresultErrorMessage vs PQerrorMessage API issue" } ]
[ { "msg_contents": "It looks like we missed this in a6642b3ae.\n\nI think it's an odd behavior of pg_stat_progress_create_index to simultaneously\nshow the global progress as well as the progress for the current partition ...\n\nIt seems like for partitioned reindex, reindex_index() should set the AM, which\nis used in the view:\n\nsrc/backend/catalog/system_views.sql- WHEN 2 THEN 'building index' ||\nsrc/backend/catalog/system_views.sql: COALESCE((': ' || pg_indexam_progress_phasename(S.param9::oid, S.param11)),\n\nMaybe it needs a new flag, like:\nparams->options & REINDEXOPT_REPORT_PROGRESS_AM\n\nI don't understand why e66bcfb4c added multiple calls to\npgstat_progress_start_command().\n\n-- \nJustin", "msg_date": "Tue, 16 Feb 2021 00:42:14 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "progress reporting for partitioned REINDEX" }, { "msg_contents": "On Tue, 16 Feb 2021, 07:42 Justin Pryzby, <pryzby@telsasoft.com> wrote:\n>\n> It looks like we missed this in a6642b3ae.\n>\n> I think it's an odd behavior of pg_stat_progress_create_index to simultaneously\n> show the global progress as well as the progress for the current partition ...\n>\n> It seems like for partitioned reindex, reindex_index() should set the AM, which\n> is used in the view:\n>\n> src/backend/catalog/system_views.sql- WHEN 2 THEN 'building index' ||\n> src/backend/catalog/system_views.sql: COALESCE((': ' || pg_indexam_progress_phasename(S.param9::oid, S.param11)),\n>\n> Maybe it needs a new flag, like:\n> params->options & REINDEXOPT_REPORT_PROGRESS_AM\n>\n> I don't understand why e66bcfb4c added multiple calls to\n> pgstat_progress_start_command().\n\n\nThese were added to report the index and table that are currently\nbeing worked on in concurrent reindexes of tables, schemas and\ndatabases. 
Before that commit, it would only report up to the last\nindex being prepared in phase 1, leaving the user with no info on\nwhich index is being rebuilt.\n\nWhy pgstat_progress_start_command specifically was chosen? That is\nbecause there is no method to update the\nbeentry->st_progress_command_target other than through\nstat_progress_start_command, and according to the docs that field\nshould contain the tableId of the index that is currently being worked\non. This field needs a pgstat_progress_start_command because CIC / RiC\nreindexes all indexes concurrently at the same time (and not grouped\nby e.g. table), so we must re-start reporting for each index in each\nnew phase in which we report data to get the heapId reported correctly\nfor that index.\n\n\nWith regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Tue, 16 Feb 2021 12:39:08 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: progress reporting for partitioned REINDEX" }, { "msg_contents": "On Tue, Feb 16, 2021 at 12:39:08PM +0100, Matthias van de Meent wrote:\n> These were added to report the index and table that are currently\n> being worked on in concurrent reindexes of tables, schemas and\n> databases. Before that commit, it would only report up to the last\n> index being prepared in phase 1, leaving the user with no info on\n> which index is being rebuilt.\n\nNothing much to add on top of what Matthias is saying here. REINDEX\nfor partitioned relations builds first the full list of partitions to\nwork on, and then processes each one of them in a separate\ntransaction. This is consistent with what we do for other commands\nthat need to handle an object different than a non-partitioned table\nor a non-partitioned index. The progress reporting has to report the\nindex whose storage is manipulated and its parent table.\n\n> Why pgstat_progress_start_command specifically was chosen? 
That is\n> because there is no method to update the\n> beentry->st_progress_command_target other than through\n> stat_progress_start_command, and according to the docs that field\n> should contain the tableId of the index that is currently being worked\n> on. This field needs a pgstat_progress_start_command because CIC / RiC\n> reindexes all indexes concurrently at the same time (and not grouped\n> by e.g. table), so we must re-start reporting for each index in each\n> new phase in which we report data to get the heapId reported correctly\n> for that index.\n\nUsing pgstat_progress_start_command() for this purpose is fine IMO.\nThis provides enough information for the user without complicating\nmore this API layer.\n\n- if (progress)\n- pgstat_progress_update_param(PROGRESS_CREATEIDX_ACCESS_METHOD_OID,\n- iRel->rd_rel->relam);\n+ // Do this unconditionally?\n+ pgstat_progress_update_param(PROGRESS_CREATEIDX_ACCESS_METHOD_OID,\n+ iRel->rd_rel->relam);\nYou cannot do that, this would clobber the progress information of any\nupper layer that already reports something to the progress infra in\nthe backend's MyProc. CLUSTER is one example calling reindex_relation()\nthat does *not* want progress reporting to happen in REINDEX. \n\n+ /* progress reporting for partitioned indexes */\n+ if (relkind == RELKIND_PARTITIONED_INDEX)\n+ {\n+ const int progress_index[3] = {\n+ PROGRESS_CREATEIDX_COMMAND,\n+ PROGRESS_CREATEIDX_INDEX_OID,\n+ PROGRESS_CREATEIDX_PARTITIONS_TOTAL\n+ };\nThis does not make sense in ReindexPartitions() IMO because this\nrelation is not reindexed as it has no storage, and you would lose the\ncontext of each partition.\n\nSomething that we may want to study instead is whether we'd like to\nreport to the user the set of relations a REINDEX command is working\non and on which relation the work is currently done. 
But I am not\nreally sure that we need that as long a we have a VERBOSE option that\nlets us know via the logs what already happened in a single command.\n\nI see no bug here.\n--\nMichael", "msg_date": "Wed, 17 Feb 2021 14:55:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: progress reporting for partitioned REINDEX" }, { "msg_contents": "On Wed, Feb 17, 2021 at 02:55:04PM +0900, Michael Paquier wrote:\n> I see no bug here.\n\npg_stat_progress_create_index includes partitions_{done,total} for\nCREATE INDEX p, so isn't it strange if it wouldn't do likewise for\nREINDEX INDEX p ?\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 17 Feb 2021 00:10:43 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: progress reporting for partitioned REINDEX" }, { "msg_contents": "On Wed, Feb 17, 2021 at 12:10:43AM -0600, Justin Pryzby wrote:\n> On Wed, Feb 17, 2021 at 02:55:04PM +0900, Michael Paquier wrote:\n>> I see no bug here.\n> \n> pg_stat_progress_create_index includes partitions_{done,total} for\n> CREATE INDEX p, so isn't it strange if it wouldn't do likewise for\n> REINDEX INDEX p ?\n\nThere is always room for improvement. This stuff applies now only\nwhen creating an index in the non-concurrent case because an index\ncannot be created on a partitioned table concurrently, and this\nbehavior is documented as such. If we are going to improve this area,\nit seems to me that we may want to consider more cases than just the\ncase of partitions, as it could also help the monitoring of REINDEX on\nschemas and databases.\n\nI don't think that this fits as an open item. 
That's just a different\nfeature.\n--\nMichael", "msg_date": "Wed, 17 Feb 2021 15:36:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: progress reporting for partitioned REINDEX" }, { "msg_contents": "On Wed, Feb 17, 2021 at 03:36:20PM +0900, Michael Paquier wrote:\n> On Wed, Feb 17, 2021 at 12:10:43AM -0600, Justin Pryzby wrote:\n> > On Wed, Feb 17, 2021 at 02:55:04PM +0900, Michael Paquier wrote:\n> >> I see no bug here.\n> > \n> > pg_stat_progress_create_index includes partitions_{done,total} for\n> > CREATE INDEX p, so isn't it strange if it wouldn't do likewise for\n> > REINDEX INDEX p ?\n> \n> There is always room for improvement. This stuff applies now only\n> when creating an index in the non-concurrent case because an index\n> cannot be created on a partitioned table concurrently, and this\n> behavior is documented as such. If we are going to improve this area,\n> it seems to me that we may want to consider more cases than just the\n> case of partitions, as it could also help the monitoring of REINDEX on\n> schemas and databases.\n> \n> I don't think that this fits as an open item. 
That's just a different\n> feature.\n\nI see it as an omission in the existing feature.\n\nSince v13, pg_stat_progress_create_index does progress reports for CREATE INDEX\n(partitioned and nonpartitioned), and REINDEX of nonpartitioned tables.\n\nWhen we implemented REINDEX of partitioned tables, it should've handled\nprogress reporting in the fields where that's reported for CREATE INDEX.\nOr else we should document that \"partitions_total/done are not populated for\nREINDEX of a partitioned table as they are for CREATE INDEX\".\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 17 Feb 2021 10:24:37 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: progress reporting for partitioned REINDEX" }, { "msg_contents": "On Wed, Feb 17, 2021 at 10:24:37AM -0600, Justin Pryzby wrote:\n> When we implemented REINDEX of partitioned tables, it should've handled\n> progress reporting in the fields where that's reported for CREATE INDEX.\n> Or else we should document that \"partitions_total/done are not populated for\n> REINDEX of a partitioned table as they are for CREATE INDEX\".\n\nCREATE INDEX and REINDEX are two completely separate commands, with\nseparate code paths, and mostly separate logics. 
When it comes to\nREINDEX, the information that is currently shown to the user is not\nincorrect, but in line with what the progress reporting of ~13 is able\nto do: each index gets reported with its parent table one-by-one,\ndepending on whether CONCURRENTLY is used or not, in consistency with what\nReindexMultipleTables() does for all the REINDEX commands working on\nmultiple objects, processing in one transaction each object listed\npreviously.\n\nNow, coming back to the ask, I think that if we want to provide some\ninformation in the REINDEX with the list of relations to work on, we\nare going to need more fields than what we have now, to report:\n1) The total number of indexes which REINDEX is working on for the\ncurrent relation worked on.\n2) The n-th index being worked on by REINDEX, as of the number of\nindexes in 1).\n3) The total number of relations a given command is working on, aka\nthe number of tables REINDEX SCHEMA, DATABASE, SYSTEM or REINDEX on a\npartitioned relation has accumulated.\n4) The n-th relation listed in 3) currently worked on.\n\nThe current columns partitions_total and partitions_done are partially\nable to fill in the roles of 3) and 4), if we'd rename those columns\nto relations_done and relations_total, still they could also mean 1)\nand 2) in some contexts, like the number of indexes worked on for a\nsingle relation. So the problem is more complex than you make it\nsound, and needs to consider a certain number of cases to be\nconsistent across all the REINDEX commands that exist. In short, this\nis not only a problem related to partitioned tables.\n\nI have no issues with documenting more precisely on which commands\npartitions_total and partitions_done apply currently, by citing the\ncommands where these are effective. 
We do that for index_relid for\ninstance.\n--\nMichael", "msg_date": "Thu, 18 Feb 2021 14:17:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: progress reporting for partitioned REINDEX" }, { "msg_contents": "On Thu, Feb 18, 2021 at 02:17:00PM +0900, Michael Paquier wrote:\n> I have no issues with documenting more precisely on which commands\n> partitions_total and partitions_done apply currently, by citing the\n> commands where these are effective. We do that for index_relid for\n> instance.\n\nPlease find attached a patch to do that. Justin, what do you think?\n--\nMichael", "msg_date": "Fri, 19 Feb 2021 15:06:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: progress reporting for partitioned REINDEX" }, { "msg_contents": "On Fri, Feb 19, 2021 at 03:06:04PM +0900, Michael Paquier wrote:\n> On Thu, Feb 18, 2021 at 02:17:00PM +0900, Michael Paquier wrote:\n> > I have no issues with documenting more precisely on which commands\n> > partitions_total and partitions_done apply currently, by citing the\n> > commands where these are effective. We do that for index_relid for\n> > instance.\n> \n> Please find attached a patch to do that. 
Justin, what do you think?\n\nLooks fine.\n\nI removed this from open items.\n\nAlso, I noticed that vacuum recurses into partition hierarchies since v10, but\npg_stat_progress_vacuum also doesn't show anything about the parent table or\nthe progress of recursing through the hierarchy.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 19 Feb 2021 00:12:54 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: progress reporting for partitioned REINDEX" }, { "msg_contents": "On Fri, Feb 19, 2021 at 12:12:54AM -0600, Justin Pryzby wrote:\n> Looks fine.\n\nThanks, applied then to clarify things.\n\n> Also, I noticed that vacuum recurses into partition hierarchies since v10, but\n> pg_stat_progress_vacuum also doesn't show anything about the parent table or\n> the progress of recursing through the hierarchy.\n\nYeah, that's an area where it would be possible to improve the\nmonitoring, for both autovacuums and manual VACUUMs.\n--\nMichael", "msg_date": "Sat, 20 Feb 2021 10:37:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: progress reporting for partitioned REINDEX" }, { "msg_contents": "On Sat, Feb 20, 2021 at 10:37:08AM +0900, Michael Paquier wrote:\n> > Also, I noticed that vacuum recurses into partition hierarchies since v10, but\n> > pg_stat_progress_vacuum also doesn't show anything about the parent table or\n> > the progress of recursing through the hierarchy.\n> \n> Yeah, that's an area where it would be possible to improve the\n> monitoring, for both autovacuums and manual VACUUMs.\n\nI was thinking that instead of reporting partitions_done/partitions_total in\nthe individual progress views, maybe the progress across partitions should be\nreported in a separate pg_stat_progress_partitioned. This would apply to my\nCLUSTER patch as well as VACUUM. 
I haven't thought about the implementation,\nthough.\n\nIf the partitions_done/total were *removed* from the create_index view, that\nwould resolve the odd behavior that a single row simultaneously shows 1) the\noverall progress of the operation across partitions; and, 2) the detailed\ninformation about the status of the operation on the current leaf partition.\n\nHowever I guess it's not general enough to support progress reports of\nexecution of planned (not utility) statements.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 19 Feb 2021 20:40:11 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: progress reporting for partitioned REINDEX" } ]
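As a concrete reference for the columns debated in the thread above: while a CREATE INDEX on a partitioned table runs, a second session can poll the progress view. This is only a sketch — the table and index names are made up — and, as the thread notes, on v13 the same partition counters are not populated for REINDEX of a partitioned table:

```sql
-- Session 1 (illustrative names): build an index on a partitioned table.
CREATE INDEX parts_a_idx ON parts (a);

-- Session 2: watch the per-partition progress while it runs.
SELECT pid,
       relid::regclass       AS table_name,
       index_relid::regclass AS index_name,
       phase,
       partitions_total,
       partitions_done
  FROM pg_stat_progress_create_index;
```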
[ { "msg_contents": "Hello,\n\nIf I invoked a wrong ALTER TABLE command like this, I would see an\nunexpected error.\n\n=# ALTER TABLE <foreign table> ATTACH PARTITION ....\nERROR: \"ft1\" is of the wrong type\n\nThe cause is that ATWrongRelkidError doesn't handle ATT_TABLE |\nATT_PARTITIONED_INDEX.\n\nAfter checking all callers of ATSimplePermissions, I found that:\n\nThe two below are no longer used.\n\n ATT_TABLE | ATT_VIEW\n ATT_TABLE | ATT_MATVIEW | ATT_INDEX\n\nThe four below are not handled.\n\n ATT_TABLE | ATT_MATVIEW | ATT_INDEX | ATT_PARTITIONED_INDEX:\n ATT_TABLE | ATT_MATVIEW | ATT_INDEX | ATT_PARTITIONED_INDEX | ATT_FOREIGN_TABLE\n ATT_TABLE | ATT_PARTITIONED_INDEX:\n ATT_INDEX:\n\nThe attached is just fixing that. I tried to make it generic but\ndidn't find a clean and translatable way.\n\nAlso I found that only three cases in the function are exercised by\nmake check.\n\nATT_TABLE : foreign_data, indexing checks \nATT_TABLE | ATT_FOREIGN_TABLE : alter_table\nATT_TABLE | ATT_COMPOSITE_TYPE | ATT_FOREIGN_TABLE : alter_table\n\nI'm not sure it's worth the trouble so the attached doesn't do\nanything for that.\n\n\nVersions back to PG11 have similar but different mistakes.\n\nPG11, 12:\n the two below are not used\n ATT_TABLE | ATT_VIEW\n ATT_TABLE | ATT_MATVIEW | ATT_INDEX\n\n the two below are not handled\n ATT_TABLE | ATT_MATVIEW | ATT_INDEX | ATT_PARTITIONED_INDEX\n ATT_TABLE | ATT_PARTITIONED_INDEX\n\nPG13:\n the two below are not used\n ATT_TABLE | ATT_VIEW\n ATT_TABLE | ATT_MATVIEW | ATT_INDEX\n\n the three below are not handled\n ATT_TABLE | ATT_MATVIEW | ATT_INDEX | ATT_PARTITIONED_INDEX\n ATT_TABLE | ATT_MATVIEW | ATT_INDEX | ATT_PARTITIONED_INDEX | ATT_FOREIGN_TABLE\n ATT_TABLE | ATT_PARTITIONED_INDEX\n\nPG10:\n ATT_TABLE | ATT_VIEW is not used\n (all values are handled)\n\nSo the attached are the patches for PG11, 12, 13 and master.\n\nIt seems that the case lines in the function are intended to be in the\nATT_*'s definition order, but 
some of them are out of that\norder. However, I didn't reorder existing lines in the attached. I\ndidn't check the value itself is correct for the callers.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 16 Feb 2021 18:14:15 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "ERROR: \"ft1\" is of the wrong type." }, { "msg_contents": "On Tue, Feb 16, 2021 at 06:14:15PM +0900, Kyotaro Horiguchi wrote:\n> The attached is just fixing that. I tried to make it generic but\n> didn't find a clean and translatable way.\n> \n> Also I found that only three cases in the function are exercised by\n> make check.\n> \n> ATT_TABLE : foreign_data, indexing checks \n> ATT_TABLE | ATT_FOREIGN_TABLE : alter_table\n> ATT_TABLE | ATT_COMPOSITE_TYPE | ATT_FOREIGN_TABLE : alter_table\n> \n> I'm not sure it's worth the trouble so the attached doesn't do\n> anything for that.\n\nEach sentence needs to be completely separate, as the language\ntranslated to may tweak the punctuation of the set of objects listed,\nat least. But you know that already :)\n\nIf you have seen cases where permission checks show up messages with\nan incorrect relkind mentioned, could you add some regression tests\nable to trigger the problematic cases you saw and to improve this\ncoverage?\n--\nMichael", "msg_date": "Thu, 18 Feb 2021 16:27:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ERROR: \"ft1\" is of the wrong type." }, { "msg_contents": "At Thu, 18 Feb 2021 16:27:23 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, Feb 16, 2021 at 06:14:15PM +0900, Kyotaro Horiguchi wrote:\n> > The attached is just fixing that. 
I tried to make it generic but\n> > didn't find a clean and translatable way.\n> > \n> > Also I found that only three cases in the function are exercised by\n> > make check.\n> > \n> > ATT_TABLE : foreign_data, indexing checks \n> > ATT_TABLE | ATT_FOREIGN_TABLE : alter_table\n> > ATT_TABLE | ATT_COMPOSITE_TYPE | ATT_FOREIGN_TABLE : alter_table\n> > \n> > I'm not sure it's worth the trouble so the attached doesn't do\n> > anything for that.\n> \n> Each sentence needs to be completely separate, as the language\n> translated to may tweak the punctuation of the set of objects listed,\n> at least. But you know that already :)\n\nYeah, I strongly feel that :p As you pointed out, the punctuation and the\narticle (for index and others) were exactly that.\n\n> If you have seen cases where permission checks show up messages with\n> an incorrect relkind mentioned, could you add some regression tests\n> able to trigger the problematic cases you saw and to improve this\n> coverage?\n\nI can add some regression tests to cover all the live cases. That\ncould reveal no-longer-used combinations.\n\nI'll do that. Thanks for the suggestion.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 18 Feb 2021 17:17:37 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ERROR: \"ft1\" is of the wrong type." }, { "msg_contents": "At Thu, 18 Feb 2021 17:17:37 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> I can add some regression tests to cover all the live cases. That\n> could reveal no-longer-used combinations.\n\nThe attached is that.\n\nATT_VIEW is used for \"CREATE OR REPLACE view\" and checked against\nearlier in DefineVirtualRelation. 
But we can add a test to make sure\nthat is checked anywhere.\n\nAll other values can be exercised.\n\nATT_TABLE | ATT_MATVIEW\nATT_TABLE | ATT_MATVIEW | ATT_INDEX | ATT_PARTITIONED_INDEX\nATT_TABLE | ATT_MATVIEW | ATT_INDEX | ATT_PARTITIONED_INDEX |\n ATT_FOREIGN_TABLE\nATT_TABLE | ATT_MATVIEW | ATT_FOREIGN_TABLE\nATT_TABLE | ATT_MATVIEW | ATT_INDEX | ATT_FOREIGN_TABLE\nATT_TABLE | ATT_PARTITIONED_INDEX\nATT_TABLE | ATT_VIEW | ATT_MATVIEW | ATT_INDEX\nATT_TABLE | ATT_VIEW | ATT_FOREIGN_TABLE:\nATT_FOREIGN_TABLE\n\nThese are provoked by the following commands respectively:\n\n ALTER TABLE <view> CLUSTER ON\n ALTER TABLE <view> SET TABLESPACE\n ALTER TABLE <view> ALTER COLUMN <col> SET STATISTICS\n ALTER TABLE <view> ALTER COLUMN <col> SET STORAGE\n ALTER TABLE <view> ALTER COLUMN <col> SET()\n ALTER TABLE <view> ATTACH PARTITION\n ALTER TABLE/INDEX <partidx> SET/RESET\n ALTER TABLE <matview> ALTER <col> SET DEFAULT\n ALTER TABLE/INDEX <pidx> ALTER COLLATION ..REFRESH VERSION\n ALTER TABLE <view> OPTIONS ()\n\nThe following three errors are already exercised.\n\nATT_TABLE\nATT_TABLE | ATT_FOREIGN_TABLE\nATT_TABLE | ATT_COMPOSITE_TYPE | ATT_FOREIGN_TABLE:\n\n\nBy the way, I find this somewhat mystifying. I'm not sure it's worth\nfixing though..\n\nALTER MATERIALIZED VIEW mv1 ALTER COLUMN a SET DEFAULT 1;\nERROR: \"mv1\" is not a table, view, or foreign table\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 19 Feb 2021 17:30:39 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ERROR: \"ft1\" is of the wrong type." }, { "msg_contents": "Hi Horiguchi-san,\n\nOn Fri, Feb 19, 2021 at 05:30:39PM +0900, Kyotaro Horiguchi wrote:\n> The attached is that.\n> \n> ATT_VIEW is used for \"CREATE OR REPLACE view\" and checked against\n> earlier in DefineVirtualRelation. But we can add a test to make sure\n> that is checked anywhere.\n\n
But we can add a test to make sure\n> that is checked anywhere.\n\nMy apologies for not coming back to this thread earlier. I have this\nthread in my backlog for some time now but I was not able to come back\nto it. That's too late for v14 but it could be possible to do\nsomething for v15. Could you add this patch to the next commit fest?\nThat's fine to add my name as reviewer.\n\nThanks,\n--\nMichael", "msg_date": "Thu, 22 Apr 2021 13:48:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ERROR: \"ft1\" is of the wrong type." }, { "msg_contents": "At Thu, 22 Apr 2021 13:48:45 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> Hi Horiguchi-san,\n> \n> On Fri, Feb 19, 2021 at 05:30:39PM +0900, Kyotaro Horiguchi wrote:\n> > The attached is that.\n> > \n> > ATT_VIEW is used for \"CREATE OR REPLACE view\" and checked against\n> > earlier in DefineVirtualRelation. But we can add a test to make sure\n> > that is checked anywhere.\n> \n> My apologies for not coming back to this thread earlier. I have this\n> thread in my backlog for some time now but I was not able to come back\n> to it. That's too late for v14 but it could be possible to do\n> something for v15. Could you add this patch to the next commit fest?\n> That's fine to add my name as reviewer.\n\nThank you for kindly telling me that, but please don't worry.\n\nI'll add it to the next CF, with specifying you as a reviewer as you\ntold.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 22 Apr 2021 15:00:55 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ERROR: \"ft1\" is of the wrong type." 
}, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nI have tested it with various object types and getting a meaningful error.", "msg_date": "Tue, 29 Jun 2021 20:13:14 +0000", "msg_from": "ahsan hadi <ahsan.hadi@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ERROR: \"ft1\" is of the wrong type." }, { "msg_contents": "At Tue, 29 Jun 2021 20:13:14 +0000, ahsan hadi <ahsan.hadi@gmail.com> wrote in \n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: not tested\n> \n> I have tested it with various object types and getting a meaningful error.\n\nThanks for looking this, Ahsan.\n\nHowever, Peter-E is proposing a change at a fundamental level, which\nlooks more promising (disregarding backpatch burden).\n\nhttps://www.postgresql.org/message-id/01d4fd55-d4fe-5afc-446c-a7f99e043f3d@enterprisedb.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 30 Jun 2021 09:55:58 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ERROR: \"ft1\" is of the wrong type." 
}, { "msg_contents": "On Wed, Jun 30, 2021 at 5:56 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Tue, 29 Jun 2021 20:13:14 +0000, ahsan hadi <ahsan.hadi@gmail.com>\n> wrote in\n> > The following review has been posted through the commitfest application:\n> > make installcheck-world: tested, passed\n> > Implements feature: tested, passed\n> > Spec compliant: tested, passed\n> > Documentation: not tested\n> >\n> > I have tested it with various object types and getting a meaningful\n> error.\n>\n> Thanks for looking this, Ahsan.\n>\n> However, Peter-E is proposing a change at a fundamental level, which\n> looks more promising (disregarding backpatch burden).\n>\n>\n> https://www.postgresql.org/message-id/01d4fd55-d4fe-5afc-446c-a7f99e043f3d@enterprisedb.com\n\n\nSure I will also take a look at this patch.\n\n+1 for avoiding the backpatching burden.\n\n\n>\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>\n\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : http://www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nEMAIL: mailto: ahsan.hadi@highgo.ca\n\nOn Wed, Jun 30, 2021 at 5:56 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:At Tue, 29 Jun 2021 20:13:14 +0000, ahsan hadi <ahsan.hadi@gmail.com> wrote in \n> The following review has been posted through the commitfest application:\n> make installcheck-world:  tested, passed\n> Implements feature:       tested, passed\n> Spec compliant:           tested, passed\n> Documentation:            not tested\n> \n> I have tested it with various object types and getting a meaningful error.\n\nThanks for looking this, Ahsan.\n\nHowever, Peter-E is proposing a change at a fundamental level, which\nlooks more promising (disregarding backpatch burden).\n\nhttps://www.postgresql.org/message-id/01d4fd55-d4fe-5afc-446c-a7f99e043f3d@enterprisedb.comSure I will also take a look at this patch. +1 for avoiding the backpatching burden. 
\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n-- Highgo Software (Canada/China/Pakistan)URL : http://www.highgo.caADDR: 10318 WHALLEY BLVD, Surrey, BCEMAIL: mailto: ahsan.hadi@highgo.ca", "msg_date": "Wed, 30 Jun 2021 13:43:52 +0500", "msg_from": "Ahsan Hadi <ahsan.hadi@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ERROR: \"ft1\" is of the wrong type." }, { "msg_contents": "On Wed, Jun 30, 2021 at 01:43:52PM +0500, Ahsan Hadi wrote:\n> Sure I will also take a look at this patch.\n> \n> +1 for avoiding the backpatching burden.\n\nFrom what I recall of this thread, nobody has really complained about\nthis stuff either, so a backpatch would be off the table. I agree\nthat what Peter E is proposing on the other thread is much more\nsuitable in the long term, as there is no need to worry about multiple\ncombinations of relkinds in error message, so such error strings\nbecome a no-brainer when more relkinds are added.\n--\nMichael", "msg_date": "Fri, 2 Jul 2021 13:20:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ERROR: \"ft1\" is of the wrong type." }, { "msg_contents": "\nOn 02.07.21 06:20, Michael Paquier wrote:\n> On Wed, Jun 30, 2021 at 01:43:52PM +0500, Ahsan Hadi wrote:\n>> Sure I will also take a look at this patch.\n>>\n>> +1 for avoiding the backpatching burden.\n> \n> From what I recall of this thread, nobody has really complained about\n> this stuff either, so a backpatch would be off the table. I agree\n> that what Peter E is proposing on the other thread is much more\n> suitable in the long term, as there is no need to worry about multiple\n> combinations of relkinds in error message, so such error strings\n> become a no-brainer when more relkinds are added.\n\nMy patch is now committed. 
The issue that started this thread now \nbehaves like this:\n\nALTER TABLE ft1 ATTACH PARTITION ...;\nERROR: ALTER action ATTACH PARTITION cannot be performed on relation \"ft1\"\nDETAIL: This operation is not supported for foreign tables.\n\nSo, for PG15, this is taken care of.\n\nBackpatches under the old style for missing combinations would still be \nin scope, but there my comment on the proposed patches is that I would \nrather not remove apparently unused combinations from back branches.\n\n\n", "msg_date": "Thu, 8 Jul 2021 10:02:53 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: ERROR: \"ft1\" is of the wrong type." }, { "msg_contents": "At Thu, 8 Jul 2021 10:02:53 +0200, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote in \n> My patch is now committed. The issue that started this thread now behaves\n> like this:\n> \n> ALTER TABLE ft1 ATTACH PARTITION ...;\n> ERROR: ALTER action ATTACH PARTITION cannot be performed on relation \"ft1\"\n> DETAIL: This operation is not supported for foreign tables.\n> \n> So, for PG15, this is taken care of.\n\nCool.\n\n> Backpatches under the old style for missing combinations would still be in\n> scope, but there my comment on the proposed patches is that I would rather not\n> remove apparently unused combinations from back branches.\n\nSounds reasonable. So the attached are that for PG11-PG14. 11 and 12\nshares the same patch.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 09 Jul 2021 10:44:13 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ERROR: \"ft1\" is of the wrong type." }, { "msg_contents": "On Fri, Jul 09, 2021 at 10:44:13AM +0900, Kyotaro Horiguchi wrote:\n> Sounds reasonable. So the attached are that for PG11-PG14. 
11 and 12\n> shares the same patch.\n\nHow much do the regression tests published upthread in\nhttps://postgr.es/m/20210219.173039.609314751334535042.horikyota.ntt@gmail.com\napply here? Shouldn't we also have some regression tests for the new\nerror cases you are adding? I agree that we'd better avoid removing\nthose entries, one argument in favor of not removing any entries being\nthat this could have an impact on forks.\n--\nMichael", "msg_date": "Fri, 9 Jul 2021 11:03:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ERROR: \"ft1\" is of the wrong type." }, { "msg_contents": "At Fri, 9 Jul 2021 11:03:56 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Fri, Jul 09, 2021 at 10:44:13AM +0900, Kyotaro Horiguchi wrote:\n> > Sounds reasonable. So the attached are that for PG11-PG14. 11 and 12\n> > shares the same patch.\n> \n> How much do the regression tests published upthread in\n> https://postgr.es/m/20210219.173039.609314751334535042.horikyota.ntt@gmail.com\n> apply here? Shouldn't we also have some regression tests for the new\n> error cases you are adding? I agree that we'd better avoid removing\n\nMmm. Ok, I distributed the mother regression test into each version.\n\nPG11, 12:\n\n - ATT_TABLE | ATT_MATVIEW | ATT_INDEX | ATT_PARTITIONED_INDEX\n\n Added.\n\n - ATT_TABLE | ATT_PARTITIONED_INDEX\n\n This test doesn't detect the \"is of the wrong type\" issue.\n\n The item is practically a dead one since the combination is caught\n by transformPartitionCmd before visiting ATPrepCmd, which emits a\n bit different error message for the test.\n\n \"\\\"%s\\\" is not a partitioned table or index\"\n\n ATPrepCmd emits an error that:\n\n \"\\\"%s\\\" is not a table or partitioned index\"\n\n Hmm.. somewhat funny. Actually ATT_TABLE is a bit off here but\n there's no symbol ATT_PARTITIONED_TABLE. Theoretically the symbol\n is needed but practically not. 
I don't think we need to do more\n than that at least for these versions. (Or we don't even need to\n add this item.)\n\nPG13:\n\n - ATT_TABLE | ATT_MATVIEW | ATT_INDEX | ATT_PARTITIONED_INDEX\n\n Same as PG12.\n\n - ATT_TABLE | ATT_PARTITIONED_INDEX:\n\n This version reaches this item in ATPrepCmd because the commit\n 1281a5c907 moved the parse-transform phase to the ATExec stage,\n which is visited after ATPrepCmd.\n\n On the other hand, when the target relation is a regular table, the\n error is missed by ATPrepCmd then the control reaches the\n Exec-stage. The error is finally caught by transformPartitionCmd.\n\n Of course this works fine but doesn't seem clean, but it is\n apparently a matter of the master branch.\n\n - ATT_TABLE | ATT_MATVIEW | ATT_INDEX | ATT_PARTITIONED_INDEX | ATT_FOREIGN_TABLE\n Added and works as expected.\n\nPG14:\n\n - ATT_INDEX\n\n I noticed that this combination has been reverted by the commit\n ec48314708.\n\n - ATT_TABLE | ATT_MATVIEW | ATT_INDEX | ATT_PARTITIONED_INDEX\n - ATT_TABLE | ATT_PARTITIONED_INDEX:\n - ATT_TABLE | ATT_MATVIEW | ATT_INDEX | ATT_PARTITIONED_INDEX | ATT_FOREIGN_TABLE\n\n Same as PG13.\n\n So, PG14 and 13 share the same fix and test.\n\n> error cases you are adding? I agree that we'd better avoid removing\n> those entries, one argument in favor of not removing any entries being\n> that this could have an impact on forks.\n\nOk. The attached are the two patchsets for PG14-13 and PG12-11\ncontaining the fix and the regression test.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 09 Jul 2021 21:00:31 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ERROR: \"ft1\" is of the wrong type." }, { "msg_contents": "On Fri, Jul 09, 2021 at 09:00:31PM +0900, Kyotaro Horiguchi wrote:\n> Mmm. Ok, I distributed the mother regression test into each version.\n\nThanks, my apologies for the late reply. 
It took me some time to\nanalyze the whole.\n\n> PG11, 12:\n>\n> - ATT_TABLE | ATT_PARTITIONED_INDEX\n> \n> This test doesn't detect the \"is of the wrong type\" issue.\n> \n> The item is practically a dead one since the combination is caught\n> by transformPartitionCmd before visiting ATPrepCmd, which emits a\n> bit different error message for the test.\n\nYes, I was surprised to see this test choke in the utility parsing.\nThere is a good argument in keeping (ATT_TABLE |\nATT_PARTITIONED_INDEX) though. I analyzed the code and I agree that\nit cannot be directly reached, but a future code change on those\nbranches may expose that. And it does not really cost in keeping it\neither.\n\n> PG13:\n> Of course this works fine but doesn't seem clean, but it is\n> apparently a matter of the master branch.\n> \n> - ATT_TABLE | ATT_MATVIEW | ATT_INDEX | ATT_PARTITIONED_INDEX | ATT_FOREIGN_TABLE\n> Added and works as expected.\n\nHEAD had its own improvements, and what you have here closes some\nholes of their own, so applied. Thanks!\n--\nMichael", "msg_date": "Wed, 14 Jul 2021 19:55:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: ERROR: \"ft1\" is of the wrong type." }, { "msg_contents": "At Wed, 14 Jul 2021 19:55:18 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Fri, Jul 09, 2021 at 09:00:31PM +0900, Kyotaro Horiguchi wrote:\n> > Mmm. Ok, I distributed the mother regression test into each version.\n> \n> Thanks, my apologies for the late reply. It took me some time to\n> analyze the whole.\n..\n> > PG13:\n> > Of course this works fine but doesn't seem clean, but it is\n> > apparently a matter of the master branch.\n> > \n> > - ATT_TABLE | ATT_MATVIEW | ATT_INDEX | ATT_PARTITIONED_INDEX | ATT_FOREIGN_TABLE\n> > Added and works as expected.\n> \n> HEAD had its own improvements, and what you have here closes some\n> holes of their own, so applied. 
Thanks!\n\nThank you for committing this!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 15 Jul 2021 13:51:31 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ERROR: \"ft1\" is of the wrong type." } ]
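To summarize the thread just closed, the original complaint and the eventual v15 behavior can be reproduced with a short script. This is a sketch only — the server and table names are invented, and a usable foreign server setup is assumed — with the error messages as quoted in the thread:

```sql
CREATE EXTENSION IF NOT EXISTS postgres_fdw;
CREATE SERVER dummy_srv FOREIGN DATA WRAPPER postgres_fdw;
CREATE FOREIGN TABLE ft1 (a int) SERVER dummy_srv;
CREATE TABLE t1 (a int);

-- ft1 is not a partitioned table, so this must fail.
ALTER TABLE ft1 ATTACH PARTITION t1 FOR VALUES IN (1);
-- Before the fixes discussed here:
--   ERROR:  "ft1" is of the wrong type
-- After the rework committed for PostgreSQL 15:
--   ERROR:  ALTER action ATTACH PARTITION cannot be performed on relation "ft1"
--   DETAIL:  This operation is not supported for foreign tables.
```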
[ { "msg_contents": "Hi,\r\n\r\nProblem statement:\r\nI have to develop a solution in which a single source populates a table. Once the table is populated, it is considered as read-only and then we run many read-only queries on it.\r\nSuch read-only tables are generated by multiple simulation runs: each simulation populates an independent table, meaning there is no cross-write to the tables.\r\nHowever, the read-only queries can be executed on a single or multiple tables.\r\nIn my environment I have plenty of machines to run the simulations and I can’t use these machines to have a postgres compute farm as a cloud solution. So I can’t use my machines to run endless postgres server jobs as the solution is intended for.\r\n\r\nMy idea is:\r\nStage 1: Ad-hoc server+client for populating a table: start a server+client on a local machine, populate the table and stop the server+client. The data-dir is hosted in a central file system (e.g. NFS).\r\nStage 2: Ad-hoc server+client for querying the now read-only table(s) from step 1: start a server+client on a local machine, run read-only queries and stop the server+client.\r\nIn order to implement stage 2 I will:\r\n1. Create a new ad-hoc empty data-dir\r\n2. Create a soft-link from each data-dirtable files (including its index files) that is needed for the query to the ad-hoc data-dir.\r\nNote that files in multiple data-dirs can be linked to the ad-hoc data-dir.\r\n3. Update postgress catalog tables in the ad-hoc data-dir according to above soft-links\r\n4. 
To guarantee that there will be no modifications of read-only table files, I will implement a table-am (access methods) which registers ONLY the table-am callback functions that are relevant for running read-only queries.\r\nSince it is possible to run multiple queries on each table, there can be multiple instances of client-server describes in stage 2 running simultaneously.\r\n\r\nAny thoughts?\r\nCan it work?\r\nMy concern is for the process described in stage2#4: can I truly rely on callback functions running read-only queries do not update behind the scene the read-only table files?\r\nAny other suggestion to develop & maintain a sustainable solution?\r\n\r\nThanks,\r\nMaoz\r\n---------------------------------------------------------------------\nIntel Israel (74) Limited\n\nThis e-mail and any attachments may contain confidential material for\nthe sole use of the intended recipient(s). Any review or distribution\nby others is strictly prohibited. If you are not the intended\nrecipient, please contact the sender and delete all copies.\n\n\n\n\n\n\n\n\n\nHi,\n \nProblem statement:\nI have to develop a solution in which a single source populates a table. Once the table is populated, it is considered as read-only and then we run many read-only queries on it.\nSuch read-only tables are generated by multiple simulation runs: each simulation populates an independent table, meaning there is no cross-write to the tables.\nHowever, the read-only queries can be executed on a single or multiple tables.\r\n\nIn my environment I have plenty of machines to run the simulations and I can’t use these machines to have a postgres compute farm as a cloud solution. So I can’t use my machines to run endless postgres server jobs as\r\n the solution is intended for.\n \nMy idea is:\nStage 1: Ad-hoc server+client for populating a table: start a server+client on a local machine, populate the table and stop the server+client. The data-dir is hosted in a central file system (e.g. 
NFS).\nStage 2: Ad-hoc server+client for querying the now read-only table(s) from step 1:  start a server+client on a local machine, run read-only queries and stop the server+client.\nIn order to implement stage 2 I will:\n1.            Create a new ad-hoc empty data-dir\n2.            Create a soft-link from each data-dirtable files (including its index files) that is needed for the query to the ad-hoc data-dir.\nNote that files in multiple data-dirs can be linked to the ad-hoc data-dir.\n3.            Update postgress catalog tables in the ad-hoc data-dir according to above soft-links\n4.            To guarantee that there will be no modifications of read-only table files, I will implement a table-am (access methods) which registers ONLY the table-am callback functions that are relevant for running\r\n read-only queries.\nSince it is possible to run multiple queries on each table, there can be multiple instances of client-server describes in stage 2 running simultaneously.\n \nAny thoughts? \nCan it work?\nMy concern is for the process described in stage2#4: can I truly rely on callback functions running read-only queries do not update behind the scene the read-only table files?\nAny other suggestion to develop & maintain a sustainable solution?\n \nThanks,\nMaoz\n\n---------------------------------------------------------------------\nIntel Israel (74) Limited\nThis e-mail and any attachments may contain confidential material for\nthe sole use of the intended recipient(s). Any review or distribution\nby others is strictly prohibited. 
If you are not the intended\nrecipient, please contact the sender and delete all copies.", "msg_date": "Tue, 16 Feb 2021 13:49:02 +0000", "msg_from": "\"Guttman, Maoz\" <maoz.guttman@intel.com>", "msg_from_op": true, "msg_subject": "How to customize postgres for sharing read-only tables in multiple\n data-dirs between servers" }, { "msg_contents": "On Tue, Feb 16, 2021 at 01:49:02PM +0000, Guttman, Maoz wrote:\n> Hi,\n> \n> Problem statement:\n> I have to develop a solution in which a single source populates a table. Once the table is populated, it is considered as read-only and then we run many read-only queries on it.\n> Such read-only tables are generated by multiple simulation runs: each simulation populates an independent table, meaning there is no cross-write to the tables.\n\nI don't think you can share data dir or its files across clusters.\nThings like autovacuum will want to be able to write to the data dir.\n\nI don't think you can create a table-am which handles only \"read-only\"\noperations - currently they're all required.\n\nDid you think about using FDWs for this, instead ?\nOtherwise, maybe you could make a copy of the cluster (like with a filesystem\nclone) or pg-basebackup, or you could use replication.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 21 Feb 2021 12:39:03 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: How to customize postgres for sharing read-only tables in\n multiple data-dirs between servers" } ]
[ { "msg_contents": "The SQL standard defines a function called TRIM_ARRAY that surprisingly\nhas syntax that looks like a function! So I implemented it using a thin\nwrapper around our array slice syntax. It is literally just ($1)[1:$2].\n\nAn interesting case that I decided to handle by explaining it in the\ndocs is that this won't give you the first n elements if your lower\nbound is not 1. My justification for this is 1) non-standard lower\nbounds are so rare in the wild that 2) people using them can just not\nuse this function. The alternative is to go through the unnest dance\n(or write it in C) which defeats inlining.\n\nPatch attached.\n-- \nVik Fearing", "msg_date": "Tue, 16 Feb 2021 18:54:11 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "TRIM_ARRAY" }, { "msg_contents": "On Tue, 16 Feb 2021 at 12:54, Vik Fearing <vik@postgresfriends.org> wrote:\n\n> The SQL standard defines a function called TRIM_ARRAY that surprisingly\n> has syntax that looks like a function! So I implemented it using a thin\n> wrapper around our array slice syntax. It is literally just ($1)[1:$2].\n>\n> An interesting case that I decided to handle by explaining it in the\n> docs is that this won't give you the first n elements if your lower\n> bound is not 1. My justification for this is 1) non-standard lower\n> bounds are so rare in the wild that 2) people using them can just not\n> use this function. 
The alternative is to go through the unnest dance\n> (or write it in C) which defeats inlining.\n>\n\nI don't recall ever seeing non-default lower bounds, so I actually think\nit's OK to just rule out that scenario, but why not something like this:\n\n($1)[:array_lower ($1, 1) + $2 - 1]\n\nNote that I've used the 9.6 feature that allows omitting the lower bound.", "msg_date": "Tue, 16 Feb 2021 13:32:47 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRIM_ARRAY" }, { "msg_contents": "On 2/16/21 7:32 PM, Isaac Morland wrote:\n> On Tue, 16 Feb 2021 at 12:54, Vik Fearing <vik@postgresfriends.org> wrote:\n> \n>> The SQL standard defines a function called TRIM_ARRAY that surprisingly\n>> has syntax that looks like a function! So I implemented it using a thin\n>> wrapper around our array slice syntax. 
It is literally just ($1)[1:$2].\n>>\n>> An interesting case that I decided to handle by explaining it in the\n>> docs is that this won't give you the first n elements if your lower\n>> bound is not 1. My justification for this is 1) non-standard lower\n>> bounds are so rare in the wild that 2) people using them can just not\n>> use this function. The alternative is to go through the unnest dance\n>> (or write it in C) which defeats inlining.\n>>\n> \n> I don't recall ever seeing non-default lower bounds, so I actually think\n> it's OK to just rule out that scenario, but why not something like this:\n> \n> ($1)[:array_lower ($1, 1) + $2 - 1]\n\nI'm kind of embarrassed that I didn't think about doing that; it is a\nmuch better solution. You lose the non-standard bounds but I don't\nthink there is any way besides C to keep the lower bound regardless of\nhow you trim it.\n\nV2 attached.\n-- \nVik Fearing", "msg_date": "Tue, 16 Feb 2021 23:38:30 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: TRIM_ARRAY" }, { "msg_contents": "On 2/16/21 11:38 PM, Vik Fearing wrote:\n> On 2/16/21 7:32 PM, Isaac Morland wrote:\n>> On Tue, 16 Feb 2021 at 12:54, Vik Fearing <vik@postgresfriends.org> wrote:\n>>\n>>> The SQL standard defines a function called TRIM_ARRAY that surprisingly\n>>> has syntax that looks like a function! So I implemented it using a thin\n>>> wrapper around our array slice syntax. It is literally just ($1)[1:$2].\n>>>\n>>> An interesting case that I decided to handle by explaining it in the\n>>> docs is that this won't give you the first n elements if your lower\n>>> bound is not 1. My justification for this is 1) non-standard lower\n>>> bounds are so rare in the wild that 2) people using them can just not\n>>> use this function. 
The alternative is to go through the unnest dance\n>>> (or write it in C) which defeats inlining.\n>>>\n>>\n>> I don't recall ever seeing non-default lower bounds, so I actually think\n>> it's OK to just rule out that scenario, but why not something like this:\n>>\n>> ($1)[:array_lower ($1, 1) + $2 - 1]\n> \n> I'm kind of embarrassed that I didn't think about doing that; it is a\n> much better solution. You lose the non-standard bounds but I don't\n> think there is any way besides C to keep the lower bound regardless of\n> how you trim it.\n\nI've made a bit of a mess out of this, but I partly blame the standard\nwhich is very unclear. It actually describes trimming the right n\nelements instead of the left n like I've done here. I'll be back later\nwith a better patch that does what it's actually supposed to.\n-- \nVik Fearing\n\n\n", "msg_date": "Wed, 17 Feb 2021 01:25:52 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: TRIM_ARRAY" }, { "msg_contents": "On 2/17/21 1:25 AM, Vik Fearing wrote:\n\n> I've made a bit of a mess out of this, but I partly blame the standard\n> which is very unclear. It actually describes trimming the right n\n> elements instead of the left n like I've done here. 
I'll be back later\n> with a better patch that does what it's actually supposed to.\n\nAnd here is that patch.\n\nSince the only justification I have for such a silly function is that\nit's part of the standard, I decided to also issue the errors that the\nstandard describes which means the new function is now in C.\n-- \nVik Fearing", "msg_date": "Sun, 21 Feb 2021 03:09:05 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: TRIM_ARRAY" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, failed\nDocumentation: tested, failed\n\nThis basically does what it says, and the code looks good. The\ndocumentation is out of alphabetical order (trim_array should appear\nunder cardinality rather than over) but good otherwise. I was able to\n\"break\" the function with an untyped null in psql:\n\nselect trim_array(null, 2);\nERROR: could not determine polymorphic type because input has type unknown\n\nI don't know whether there are any circumstances other than manual entry\nin psql where this could happen, since column values and variables will\nalways be typed. 
I don't have access to the standard, but DB2's docs[1]\nnote \"if any argument is null, the result is the null value\", so an\nup-front null check might be preferable to a slightly arcane user-facing\nerror, even if it's a silly invocation of a silly function :)\n\n[1] https://www.ibm.com/support/knowledgecenter/en/SSEPEK_12.0.0/sqlref/src/tpc/db2z_bif_trimarray.html\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Mon, 01 Mar 2021 23:14:39 +0000", "msg_from": "Dian Fay <dian.m.fay@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRIM_ARRAY" }, { "msg_contents": "On 3/2/21 12:14 AM, Dian Fay wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, failed\n> Documentation: tested, failed\n\nThank you for looking at my patch!\n\n> This basically does what it says, and the code looks good. The\n> documentation is out of alphabetical order (trim_array should appear\n> under cardinality rather than over) but good otherwise.\n\nHmm. It appears between cardinality and unnest in the source code and\nalso my compiled html. Can you say more about where you're seeing the\nwrong order?\n\n> I was able to\n> \"break\" the function with an untyped null in psql:\n> \n> select trim_array(null, 2);\n> ERROR: could not determine polymorphic type because input has type unknown\n> \n> I don't know whether there are any circumstances other than manual entry\n> in psql where this could happen, since column values and variables will\n> always be typed. 
I don't have access to the standard, but DB2's docs[1]\n> note \"if any argument is null, the result is the null value\", so an\n> up-front null check might be preferable to a slightly arcane user-facing\n> error, even if it's a silly invocation of a silly function :)\n> \n> [1] https://www.ibm.com/support/knowledgecenter/en/SSEPEK_12.0.0/sqlref/src/tpc/db2z_bif_trimarray.html\n\nThe standard also says that if either argument is null, the result is\nnull. The problem here is that postgres needs to know what the return\ntype is and it can only determine that from the input.\n\nIf you give the function a typed null, it returns null as expected.\n\n> The new status of this patch is: Waiting on Author\n\nI put it back to Needs Review without a new patch because I don't know\nwhat I would change.\n-- \nVik Fearing\n\n\n", "msg_date": "Tue, 2 Mar 2021 00:53:04 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: TRIM_ARRAY" }, { "msg_contents": "Dian Fay <dian.m.fay@gmail.com> writes:\n> This basically does what it says, and the code looks good. The\n> documentation is out of alphabetical order (trim_array should appear\n> under cardinality rather than over)) but good otherwise. I was able to\n> \"break\" the function with an untyped null in psql:\n\n> select trim_array(null, 2);\n> ERROR: could not determine polymorphic type because input has type unknown\n\nThat's a generic parser behavior for polymorphic functions, not something\nthis particular function could or should dodge.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Mar 2021 18:53:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TRIM_ARRAY" }, { "msg_contents": "On Mon Mar 1, 2021 at 6:53 PM EST, Vik Fearing wrote:\n> > This basically does what it says, and the code looks good. 
The\n> > documentation is out of alphabetical order (trim_array should appear\n> > under cardinality rather than over)) but good otherwise.\n>\n> Hmm. It appears between cardinality and unnest in the source code and\n> also my compiled html. Can you say more about where you're seeing the\n> wrong order?\n\nI applied the patch to the latest commit, ffd3944ab9. Table 9.52 is\nordered:\n\narray_to_string\narray_upper\ntrim_array\ncardinality\nunnest\n\n> The problem here is that postgres needs to know what the return\n> type is and it can only determine that from the input.\n>\n> If you give the function a typed null, it returns null as expected.\n>\n> > The new status of this patch is: Waiting on Author\n>\n> I put it back to Needs Review without a new patch because I don't know\n> what I would change.\n\nI'd thought that checking v and returning null instead of raising the\nerror would be more friendly, should it be possible to pass an untyped\nnull accidentally instead of on purpose, and I couldn't rule that out.\nI've got no objections other than the docs having been displaced.\n\n\n", "msg_date": "Mon, 01 Mar 2021 19:02:50 -0500", "msg_from": "\"Dian M Fay\" <dian.m.fay@gmail.com>", "msg_from_op": false, "msg_subject": "Re: TRIM_ARRAY" }, { "msg_contents": "On 3/2/21 1:02 AM, Dian M Fay wrote:\n> On Mon Mar 1, 2021 at 6:53 PM EST, Vik Fearing wrote:\n>>> This basically does what it says, and the code looks good. The\n>>> documentation is out of alphabetical order (trim_array should appear\n>>> under cardinality rather than over)) but good otherwise.\n>>\n>> Hmm. It appears between cardinality and unnest in the source code and\n>> also my compiled html. Can you say more about where you're seeing the\n>> wrong order?\n> \n> I applied the patch to the latest commit, ffd3944ab9. 
Table 9.52 is\n> ordered:\n> \n> array_to_string\n> array_upper\n> trim_array\n> cardinality\n> unnest\n\nSo it turns out I must have fixed it locally after I posted the patch\nand then forgot I did that. Attached is a new patch with the order\ncorrect. Thanks for spotting it!\n\n>> The problem here is that postgres needs to know what the return\n>> type is and it can only determine that from the input.\n>>\n>> If you give the function a typed null, it returns null as expected.\n>>\n>>> The new status of this patch is: Waiting on Author\n>>\n>> I put it back to Needs Review without a new patch because I don't know\n>> what I would change.\n> \n> I'd thought that checking v and returning null instead of raising the\n> error would be more friendly, should it be possible to pass an untyped\n> null accidentally instead of on purpose, and I couldn't rule that out.\n\nAs Tom said, that is something that does not belong in this patch.\n-- \nVik Fearing", "msg_date": "Tue, 2 Mar 2021 02:31:06 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: TRIM_ARRAY" }, { "msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> On 3/2/21 1:02 AM, Dian M Fay wrote:\n>> I'd thought that checking v and returning null instead of raising the\n>> error would be more friendly, should it be possible to pass an untyped\n>> null accidentally instead of on purpose, and I couldn't rule that out.\n\n> As Tom said, that is something that does not belong in this patch.\n\nYeah, the individual function really doesn't have any way to affect\nthis, since the error happens on the way to identifying which function\nwe should call in the first place.\n\nI had the same problem as Dian of the func.sgml hunk winding up in\nthe wrong place. 
I think this is practically inevitable unless the\nsubmitter uses more than 3 lines of context for the diff, because\notherwise the context is just boilerplate that looks the same\neverywhere in the function tables. Unless the diff is 100% up to date\nso that the line numbers are exactly right, patch is likely to guess\nwrong about where to insert the new hunk. We'll just have to be\nvigilant about that.\n\nI fooled with your test case a bit ... I didn't think it was really\nnecessary to create and drop a table, when we could just use a VALUES\nclause as source of test data. Also you'd forgotten to update the\n\"descr\" description of the function to match the final understanding\nof the semantics.\n\nLooks good otherwise, so pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Mar 2021 16:47:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: TRIM_ARRAY" }, { "msg_contents": "On 3/3/21 10:47 PM, Tom Lane wrote:\n> \n> I had the same problem as Dian of the func.sgml hunk winding up in\n> the wrong place. I think this is practically inevitable unless the\n> submitter uses more than 3 lines of context for the diff, because\n> otherwise the context is just boilerplate that looks the same\n> everywhere in the function tables. Unless the diff is 100% up to date\n> so that the line numbers are exactly right, patch is likely to guess\n> wrong about where to insert the new hunk. We'll just have to be\n> vigilant about that.\n\nNoted.\n\n> I fooled with your test case a bit ... I didn't think it was really\n> necessary to create and drop a table, when we could just use a VALUES\n> clause as source of test data. 
Also you'd forgotten to update the\n> \"descr\" description of the function to match the final understanding\n> of the semantics.\n\nThank you.\n\n> Looks good otherwise, so pushed.\n\nThanks!\n-- \nVik Fearing\n\n\n", "msg_date": "Wed, 3 Mar 2021 23:03:46 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: TRIM_ARRAY" } ]
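For readers following the thread, the semantics that ended up committed — trim the last n elements, error when n is outside [0, cardinality], and a null result for null inputs — can be modeled in a few lines of Python. This is a sketch of the behavior, not the C implementation, and the error message is only an approximation of the server's:

```python
def trim_array(arr, n):
    """Model of SQL standard TRIM_ARRAY(arr, n): arr with its last n
    elements removed.  None models SQL NULL; an out-of-range n raises,
    mirroring the errors the standard (and the committed function) require.
    """
    if arr is None or n is None:
        return None
    if n < 0 or n > len(arr):
        raise ValueError(
            "number of elements to trim must be between 0 and %d" % len(arr))
    return arr[:len(arr) - n]
```

So `trim_array([1, 2, 3, 4, 5, 6], 2)` gives `[1, 2, 3, 4]`, matching `SELECT trim_array(ARRAY[1,2,3,4,5,6], 2)` with the committed function.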
[ { "msg_contents": "Hey, all,\n\nThe configuration parameter max_replication_slots is most notably used\nto control how many replication slots can be created on a server, but it\nalso controls how many replication origins can be tracked on the\nsubscriber side.\n\nThis is noted in the Configuration Settings section in the Logical\nReplication Chapter [1], but it is not mentioned in the documentation of\nthe parameter itself [2].\n\nThe attached patch adds an extra paragraph explaining its effect on\nsubscribers.\n\n\nUsing max_replication_slots for sizing the available number of\nreplication origin states is a little odd, and is actually noted twice\nin the source code [3] [4]:\n\n> XXX: Should we use a separate variable to size this rather than\n> max_replication_slots?\n\n> XXX: max_replication_slots is arguably the wrong thing to use, as here\n> we keep the replay state of *remote* transactions. But for now it\n> seems sufficient to reuse it, rather than introduce a separate GUC.\n\nThis is a different usage of max_replication_slots than originally\nintended, managing resource usage on the subscriber side, rather than\nthe provider side. This manifests itself in the awkwardness of the\ndocumentation, where max_replication_slots is only listed in the Sending\nServer section, and not mentioned in the Subscribers section.\n\nGiven this, I think introducing a new parameter would make sense\n(max_replication_origins? slightly confusing because there's no limit on\nthe number of records in pg_replication_origins; tracking of replication\norigins is displayed in pg_replication_origin_status).\n\nI'd be happy to make a patch for a new GUC parameter, if people think\nit's worth it to separate the functionality. 
Until then, however, the\naddition to the documentation should help prevent confusion.\n\n\n- Paul\n\n[1]: https://www.postgresql.org/docs/13/logical-replication-config.html\n[2]: https://www.postgresql.org/docs/13/runtime-config-replication.html#GUC-MAX-REPLICATION-SLOTS\n[3]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/replication/logical/origin.c;h=685eaa6134e7cad193b583ff28284d877a6d8055;hb=HEAD#l162\n[4]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/replication/logical/origin.c;h=685eaa6134e7cad193b583ff28284d877a6d8055;hb=HEAD#l495", "msg_date": "Tue, 16 Feb 2021 13:03:53 -0800", "msg_from": "Paul Martinez <paulmtz@google.com>", "msg_from_op": true, "msg_subject": "[PATCH] Note effect of max_replication_slots on subscriber side in\n documentation." }, { "msg_contents": "Hey, all,\n\nI went ahead and made a patch for introducing a new GUC variable,\nmax_replication_origins, to replace the awkward re-use of\nmax_replication_slots.\n\nI'm mostly indifferent whether a new GUC variable is necessary, or\nsimply just updating the existing documentation (the first patch I\nsent) is sufficient, but one of them should definitely be done to\nclear up the confusion.\n\n- Paul\n\nOn Tue, Feb 16, 2021 at 1:03 PM Paul Martinez <paulmtz@google.com> wrote:\n\n> Hey, all,\n>\n> The configuration parameter max_replication_slots is most notably used\n> to control how many replication slots can be created on a server, but it\n> also controls how many replication origins can be tracked on the\n> subscriber side.\n>\n> This is noted in the Configuration Settings section in the Logical\n> Replication Chapter [1], but it is not mentioned in the documentation\n> the parameter itself [2].\n>\n> The attached patch adds an extra paragraph explaining its effect on\n> subscribers.\n>\n>\n> Using max_replication_slots for sizing the number available of\n> replication origin states is a little odd, and is actually noted twice\n> in the 
source code [3] [4]:\n>\n> > XXX: Should we use a separate variable to size this rather than\n> > max_replication_slots?\n>\n> > XXX: max_replication_slots is arguably the wrong thing to use, as here\n> > we keep the replay state of *remote* transactions. But for now it\n> > seems sufficient to reuse it, rather than introduce a separate GUC.\n>\n> This is a different usage of max_replication_slots than originally\n> intended, managing resource usage on the subscriber side, rather than\n> the provider side. This manifests itself in the awkwardness of the\n> documentation, where max_replication_slots is only listed in the Sending\n> Server section, and not mentioned in the Subscribers section.\n>\n> Given this, I think introducing a new parameter would make sense\n> (max_replication_origins? slightly confusing because there's no limit on\n> the number of records in pg_replication_origins; tracking of replication\n> origins is displayed in pg_replication_origin_status).\n>\n> I'd be happy to make a patch for a new GUC parameter, if people think\n> it's worth it to separate the functionality. Until then, however, the\n> addition to the documentation should help prevent confusion.\n>\n>\n> - Paul\n>\n> [1]: https://www.postgresql.org/docs/13/logical-replication-config.html\n> [2]:\n> https://www.postgresql.org/docs/13/runtime-config-replication.html#GUC-MAX-REPLICATION-SLOTS\n> [3]:\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/replication/logical/origin.c;h=685eaa6134e7cad193b583ff28284d877a6d8055;hb=HEAD#l162\n> [4]:\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/replication/logical/origin.c;h=685eaa6134e7cad193b583ff28284d877a6d8055;hb=HEAD#l495\n>", "msg_date": "Wed, 24 Feb 2021 11:55:05 -0800", "msg_from": "Paul Martinez <paulmtz@google.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Note effect of max_replication_slots on subscriber side\n in documentation." 
}, { "msg_contents": "On Thu, Feb 25, 2021 at 2:19 AM Paul Martinez <paulmtz@google.com> wrote:\n>\n> Hey, all,\n>\n> I went ahead and made a patch for introducing a new GUC variable,\n> max_replication_origins, to replace the awkward re-use of\n> max_replication_slots.\n>\n> I'm mostly indifferent whether a new GUC variable is necessary, or\n> simply just updating the existing documentation (the first patch I\n> sent) is sufficient, but one of them should definitely be done to\n> clear up the confusion.\n>\n\n+1. I also think one of them is required. I think users who are using\ncascaded replication (means subscribers are also publishers), setting\nthis parameter might be a bit confusing and difficult. Anybody else\nhas an opinion on this matter?\n\nFor docs only patch, I have few suggestions:\n1. On page [1], it is not very clear that we are suggesting to set\nmax_replication_slots for origins whereas your new doc patch has\nclarified it, can we update the other page as well.\n2.\nSetting it a lower value than the current\n+ number of tracked replication origins (reflected in\n+ <link\nlinkend=\"view-pg-replication-origin-status\">pg_replication_origin_status</link>,\n+ not <link\nlinkend=\"catalog-pg-replication-origin\">pg_replication_origin</link>)\n+ will prevent the server from starting.\n+ </para>\n\nWhy can't we just mention pg_replication_origin above?\n\n[1] - https://www.postgresql.org/docs/13/logical-replication-config.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 25 Feb 2021 19:01:33 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Note effect of max_replication_slots on subscriber side\n in documentation." }, { "msg_contents": "On Thu, Feb 25, 2021 at 5:31 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> For docs only patch, I have few suggestions:\n> 1. 
On page [1], it is not very clear that we are suggesting to set\n> max_replication_slots for origins whereas your new doc patch has\n> clarified it, can we update the other page as well.\n\nSorry, what other page are you referring to?\n\n\n> 2.\n> Setting it a lower value than the current\n> + number of tracked replication origins (reflected in\n> + <link\n> linkend=\"view-pg-replication-origin-status\">pg_replication_origin_status</link>,\n> + not <link\n> linkend=\"catalog-pg-replication-origin\">pg_replication_origin</link>)\n> + will prevent the server from starting.\n> + </para>\n>\n> Why can't we just mention pg_replication_origin above?\n>\n\nSo this is slightly confusing:\n\npg_replication_origin just contains mappings from origin names to oids.\nIt is a regular catalog table and has no limit on its size. Users can also\nmanually insert rows into this table.\n\nhttps://www.postgresql.org/docs/13/catalog-pg-replication-origin.html\n\nThe view showing the in-memory information is actually\npg_replication_origin_status. 
pg_replication_origin_status.\nmax_replication_origin_statuses is weird (and long).\nmax_tracked_replication_origins is a possibility?\n\n(One last bit of naming confusion; the internal code refers to them as\nReplicationStates, rather than ReplicationOrigins or\nReplicationOriginStatuses, or something like that.)\n\n\n- Paul\n\n\n", "msg_date": "Thu, 25 Feb 2021 12:22:58 -0800", "msg_from": "Paul Martinez <paulmtz@google.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Note effect of max_replication_slots on subscriber side\n in documentation." }, { "msg_contents": "On Fri, Feb 26, 2021 at 1:53 AM Paul Martinez <paulmtz@google.com> wrote:\n>\n> On Thu, Feb 25, 2021 at 5:31 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > For docs only patch, I have few suggestions:\n> > 1. On page [1], it is not very clear that we are suggesting to set\n> > max_replication_slots for origins whereas your new doc patch has\n> > clarified it, can we update the other page as well.\n>\n> Sorry, what other page are you referring to?\n>\n\nhttps://www.postgresql.org/docs/devel/logical-replication-config.html\n\n>\n> > 2.\n> > Setting it a lower value than the current\n> > + number of tracked replication origins (reflected in\n> > + <link\n> > linkend=\"view-pg-replication-origin-status\">pg_replication_origin_status</link>,\n> > + not <link\n> > linkend=\"catalog-pg-replication-origin\">pg_replication_origin</link>)\n> > + will prevent the server from starting.\n> > + </para>\n> >\n> > Why can't we just mention pg_replication_origin above?\n> >\n>\n> So this is slightly confusing:\n>\n> pg_replication_origin just contains mappings from origin names to oids.\n> It is regular catalog table and has no limit on its size. Users can also\n> manually insert rows into this table.\n>\n> https://www.postgresql.org/docs/13/catalog-pg-replication-origin.html\n>\n> The view showing the in-memory information is actually\n> pg_replication_origin_status. 
The number of entries here is what is\n> actually constrained by the GUC parameter.\n>\n\nOkay, that makes sense. However, I have sent a patch today (see [1])\nwhere I have slightly updated the subscriber-side configuration\nparagraph. From PG-14 onwards, table synchronization workers also use\norigins on subscribers, so you might want to adjust.\n\n>\n>\n> This also brings up a point regarding the naming of the added GUC.\n> max_replication_origins is cleanest, but has this confusion regarding\n> pg_replication_origin vs. pg_replication_origin_status.\n> max_replication_origin_statuses is weird (and long).\n> max_tracked_replication_origins is a possibility?\n>\n\nor maybe max_replication_origin_states. I guess we can leave adding\nGUC to some other day as that might require a bit broader acceptance\nand we are already near to the start of last CF. I think we can still\nconsider it if we few more people share the same opinion as yours.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1KkbppndxxRKbaT2sXrLkdPwy44F4pjEZ0EDrVjD9MPjQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 26 Feb 2021 18:52:29 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Note effect of max_replication_slots on subscriber side\n in documentation." }, { "msg_contents": "On Fri, Feb 26, 2021 at 5:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> https://www.postgresql.org/docs/devel/logical-replication-config.html\n>\n\nAh, yep. I added a clause to the end of the sentence to clarify why we're\nusing max_replication_slots here:\n\n- The subscriber also requires the max_replication_slots to be set.\n\n+ The subscriber also requires that max_replication_slots be set to\n+ configure how many replication origins can be tracked.\n\n>\n> Okay, that makes sense. However, I have sent a patch today (see [1])\n> where I have slightly updated the subscriber-side configuration\n> paragraph. 
From PG-14 onwards, table synchronization workers also use\n> origins on subscribers, so you might want to adjust.\n>\n> ...\n>\n> I guess we can leave adding GUC to some other day as that might\n> require a bit broader acceptance and we are already near to the start\n> of last CF. I think we can still consider it if we few more people\n> share the same opinion as yours.\n>\n\nGreat. I'll wait to update the GUC patch until your patch and/or my\ndoc-only patch get merged. Should I add it to the March CF?\n\nSeparate question: are documentation updates like these ever backported\nto older versions that are still supported? And if so, would the changes\nbe reflected immediately, or would they require a minor point release?\nWhen I was on an older release I found that I'd jump back and forth\nbetween the version I was using and the latest version to see if\nanything had changed.\n\n\n- Paul", "msg_date": "Fri, 26 Feb 2021 13:16:31 -0800", "msg_from": "Paul Martinez <paulmtz@google.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Note effect of max_replication_slots on subscriber side\n in documentation." }, { "msg_contents": "On Sat, Feb 27, 2021 at 2:47 AM Paul Martinez <paulmtz@google.com> wrote:\n>\n> On Fri, Feb 26, 2021 at 5:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > https://www.postgresql.org/docs/devel/logical-replication-config.html\n> >\n>\n> Ah, yep. I added a clause to the end of the sentence to clarify why we're\n> using max_replication_slots here:\n>\n> - The subscriber also requires the max_replication_slots to be set.\n>\n> + The subscriber also requires that max_replication_slots be set to\n> + configure how many replication origins can be tracked.\n>\n\nLGTM.\n\n> >\n> > Okay, that makes sense. However, I have sent a patch today (see [1])\n> > where I have slightly updated the subscriber-side configuration\n> > paragraph. 
From PG-14 onwards, table synchronization workers also use\n> > origins on subscribers, so you might want to adjust.\n> >\n> > ...\n> >\n> > I guess we can leave adding GUC to some other day as that might\n> > require a bit broader acceptance and we are already near to the start\n> > of last CF. I think we can still consider it if we few more people\n> > share the same opinion as yours.\n> >\n>\n> Great. I'll wait to update the GUC patch until your patch and/or my\n> doc-only patch get merged. Should I add it to the March CF?\n>\n\nWhich patch are you asking about doc-patch or GUC one? If you are\nasking for a doc-patch, then I don't think it is required, I'll take\ncare of this sometime next week. For the GUC patch, my suggestion\nwould be to propose for v15 with an appropriate use-case. At this\npoint (just before the last CF of release), people are mostly busy\nwith patches that are going on for a long time so this might not get\ndue attention unless few people show-up and say it is important.\nHowever, it is up to you, if you want feel free to register your GUC\npatch in the upcoming CF.\n\n> Separate question: are documentation updates like these ever backported\n> to older versions that are still supported?\n>\n\nNot every doc-change is back-ported but I think it is good to backport\nthe user-visible ones. It is on a case-by-case basis. For this, I\nthink we can backport unless you or others feel otherwise?\n\n> And if so, would the changes\n> be reflected immediately, or would they require a minor point release?\n>\n\nWhere you are referring to the docs? If you are checking from code, it\nwill be reflected immediately.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 27 Feb 2021 14:35:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Note effect of max_replication_slots on subscriber side\n in documentation." 
}, { "msg_contents": "On Sat, Feb 27, 2021 at 2:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Feb 27, 2021 at 2:47 AM Paul Martinez <paulmtz@google.com> wrote:\n> >\n> > On Fri, Feb 26, 2021 at 5:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > https://www.postgresql.org/docs/devel/logical-replication-config.html\n> > >\n> >\n> > Ah, yep. I added a clause to the end of the sentence to clarify why we're\n> > using max_replication_slots here:\n> >\n> > - The subscriber also requires the max_replication_slots to be set.\n> >\n> > + The subscriber also requires that max_replication_slots be set to\n> > + configure how many replication origins can be tracked.\n> >\n>\n> LGTM.\n>\n\nThe rebased version attached. As mentioned earlier, I think we can\nbackpatch this patch as this clarifies the already existing behavior.\nDo let me know if you or others think otherwise?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Mon, 1 Mar 2021 17:32:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Note effect of max_replication_slots on subscriber side\n in documentation." }, { "msg_contents": "On Mon, Mar 1, 2021 at 5:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Feb 27, 2021 at 2:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sat, Feb 27, 2021 at 2:47 AM Paul Martinez <paulmtz@google.com> wrote:\n> > >\n> > > On Fri, Feb 26, 2021 at 5:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > https://www.postgresql.org/docs/devel/logical-replication-config.html\n> > > >\n> > >\n> > > Ah, yep. 
I added a clause to the end of the sentence to clarify why we're\n> > > using max_replication_slots here:\n> > >\n> > > - The subscriber also requires the max_replication_slots to be set.\n> > >\n> > > + The subscriber also requires that max_replication_slots be set to\n> > > + configure how many replication origins can be tracked.\n> > >\n> >\n> > LGTM.\n> >\n>\n> The rebased version attached.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 3 Mar 2021 14:31:32 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Note effect of max_replication_slots on subscriber side\n in documentation." } ]
[ { "msg_contents": "Hi -\n\nI saw that one of our commitfest entries (32/2914) is recently\nreporting a fail on the cfbot site [1]. I thought this was all ok a\nfew days ago.\n\nWe can see the test log indicating what was the test that failed [2]\nTest Summary Report\n-------------------\nt/002_twophase_streaming.pl (Wstat: 7424 Tests: 1 Failed: 0)\n Non-zero exit status: 29\n Parse errors: Bad plan. You planned 2 tests but ran 1.\nFiles=2, Tests=3, 4 wallclock secs ( 0.03 usr 0.00 sys + 1.59 cusr\n0.81 csys = 2.44 CPU)\nResult: FAIL\ngmake[2]: *** [../../src/makefiles/pgxs.mk:440: check] Error 1\ngmake[1]: *** [Makefile:94: check-test_decoding-recurse] Error 2\ngmake: *** [GNUmakefile:71: check-world-contrib-recurse] Error 2\n*** Error code 2\n\n\nIs there any other detailed information available anywhere, e.g.\nlogs?, which might help us work out what was the cause of the test\nfailure?\n\nThankyou.\n\n---\n[1] http://cfbot.cputube.org/\n[2] https://api.cirrus-ci.com/v1/task/5352561114873856/logs/test.log\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 17 Feb 2021 20:49:26 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Finding cause of test fails on the cfbot site" }, { "msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> I saw that one of our commitfest entries (32/2914) is recently\n> reporting a fail on the cfbot site [1]. I thought this was all ok a\n> few days ago.\n> ...\n> Is there any other detailed information available anywhere, e.g.\n> logs?, which might help us work out what was the cause of the test\n> failure?\n\nAFAIK the cfbot doesn't capture anything beyond the session typescript.\nHowever, this doesn't look that hard to reproduce locally ... 
have you\ntried, using similar configure options to what that cfbot run did?\nOnce you did reproduce it, there'd be logs under\ncontrib/test_decoding/tmp_check/.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Feb 2021 11:06:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Finding cause of test fails on the cfbot site" }, { "msg_contents": "\nOn 2/17/21 11:06 AM, Tom Lane wrote:\n> Peter Smith <smithpb2250@gmail.com> writes:\n>> I saw that one of our commitfest entries (32/2914) is recently\n>> reporting a fail on the cfbot site [1]. I thought this was all ok a\n>> few days ago.\n>> ...\n>> Is there any other detailed information available anywhere, e.g.\n>> logs?, which might help us work out what was the cause of the test\n>> failure?\n> AFAIK the cfbot doesn't capture anything beyond the session typescript.\n> However, this doesn't look that hard to reproduce locally ... have you\n> tried, using similar configure options to what that cfbot run did?\n> Once you did reproduce it, there'd be logs under\n> contrib/test_decoding/tmp_check/.\n>\n> \t\t\t\n\n\n\nyeah. The cfbot runs check-world which makes it difficult for it to know\nwhich log files to show when there's an error. That's a major part of\nthe reason the buildfarm runs a much finer grained set of steps.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 17 Feb 2021 15:18:02 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Finding cause of test fails on the cfbot site" }, { "msg_contents": "On Thu, Feb 18, 2021 at 9:18 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> On 2/17/21 11:06 AM, Tom Lane wrote:\n> > Peter Smith <smithpb2250@gmail.com> writes:\n> >> I saw that one of our commitfest entries (32/2914) is recently\n> >> reporting a fail on the cfbot site [1]. 
I thought this was all ok a\n> >> few days ago.\n> >> ...\n> >> Is there any other detailed information available anywhere, e.g.\n> >> logs?, which might help us work out what was the cause of the test\n> >> failure?\n> > AFAIK the cfbot doesn't capture anything beyond the session typescript.\n> > However, this doesn't look that hard to reproduce locally ... have you\n> > tried, using similar configure options to what that cfbot run did?\n> > Once you did reproduce it, there'd be logs under\n> > contrib/test_decoding/tmp_check/.\n>\n> yeah. The cfbot runs check-world which makes it difficult for it to know\n> which log files to show when there's an error. That's a major part of\n> the reason the buildfarm runs a much finer grained set of steps.\n\nYeah, it's hard to make it print out just the right logs without\ndumping so much stuff that it's hard to see the wood for the trees;\nperhaps if the Makefile had an option to dump relevant stuff for the\nspecific tests that failed, or perhaps the buildfarm is already better\nat that and cfbot should just use the buildfarm client directly. Hmm.\nAnother idea would be to figure out how to make a tarball of all log\nfiles that you can download for inspection with better tools at home\nwhen things go wrong. It would rapidly blow through the 1GB limit for\nstored \"artefacts\" on open source/community Cirrus accounts though, so\nwe'd need to figure out how to manage retention carefully.\n\nFor what it's worth, I tried to reproduce this particular on a couple\nof systems, many times, with no luck. It doesn't look like a freak CI\nfailure (there have been a few random terminations I can't explain\nrecently, but they look different, I think there was a Google Compute\nEngine outage that might explain that), and it failed in exactly the\nsame way on Linux and FreeBSD. 
I tried locally on FreeBSD, on top of\ncommit a975ff4980d60f8cbd8d8cbcff70182ea53e787a (which is what the\nlast cfbot run did), because it conflicts with a recent change so it\ndoesn't apply on the tip of master right now.\n\n\n", "msg_date": "Thu, 18 Feb 2021 09:42:04 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Finding cause of test fails on the cfbot site" }, { "msg_contents": "On Thu, Feb 18, 2021 at 9:42 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> ... (there have been a few random terminations I can't explain\n> recently, but they look different, I think there was a Google Compute\n> Engine outage that might explain that), ...\n\nThere's also occasionally a failure like this[1], on FreeBSD only:\n\nt/001_stream_rep.pl .................. ok\nt/002_archiving.pl ................... ok\nt/003_recovery_targets.pl ............ ok\nBailout called. Further testing stopped: system pg_basebackup failed\nFAILED--Further testing stopped: system pg_basebackup failed\n\nI have a sneaking suspicion this is a real problem on master, but more\nlogs will be needed to guess more about that...\n\n[1] https://cirrus-ci.com/task/5046982650626048?command=test#L733\n\n\n", "msg_date": "Thu, 18 Feb 2021 11:54:36 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Finding cause of test fails on the cfbot site" }, { "msg_contents": "On Thu, Feb 18, 2021 at 9:55 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Thu, Feb 18, 2021 at 9:42 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > ... (there have been a few random terminations I can't explain\n> > recently, but they look different, I think there was a Google Compute\n> > Engine outage that might explain that), ...\n>\n> There's also occasionally a failure like this[1], on FreeBSD only:\n>\n> t/001_stream_rep.pl .................. ok\n> t/002_archiving.pl ................... 
ok\n> t/003_recovery_targets.pl ............ ok\n> Bailout called. Further testing stopped: system pg_basebackup failed\n> FAILED--Further testing stopped: system pg_basebackup failed\n>\n> I have a sneaking suspicion this is a real problem on master, but more\n> logs will be needed to guess more about that...\n>\n> [1] https://cirrus-ci.com/task/5046982650626048?command=test#L733\n\nThanks for all the effort spent into looking at this.\n\nMeanwhile, since you pointed out the patch is not applying on the HEAD\ntip I can at least address that.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 18 Feb 2021 10:48:23 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Finding cause of test fails on the cfbot site" }, { "msg_contents": "\nOn 2/17/21 3:42 PM, Thomas Munro wrote:\n> On Thu, Feb 18, 2021 at 9:18 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> On 2/17/21 11:06 AM, Tom Lane wrote:\n>>> Peter Smith <smithpb2250@gmail.com> writes:\n>>>> I saw that one of our commitfest entries (32/2914) is recently\n>>>> reporting a fail on the cfbot site [1]. I thought this was all ok a\n>>>> few days ago.\n>>>> ...\n>>>> Is there any other detailed information available anywhere, e.g.\n>>>> logs?, which might help us work out what was the cause of the test\n>>>> failure?\n>>> AFAIK the cfbot doesn't capture anything beyond the session typescript.\n>>> However, this doesn't look that hard to reproduce locally ... have you\n>>> tried, using similar configure options to what that cfbot run did?\n>>> Once you did reproduce it, there'd be logs under\n>>> contrib/test_decoding/tmp_check/.\n>> yeah. The cfbot runs check-world which makes it difficult for it to know\n>> which log files to show when there's an error. 
That's a major part of\n>> the reason the buildfarm runs a much finer grained set of steps.\n> Yeah, it's hard to make it print out just the right logs without\n> dumping so much stuff that it's hard to see the wood for the trees;\n> perhaps if the Makefile had an option to dump relevant stuff for the\n> specific tests that failed, or perhaps the buildfarm is already better\n> at that and cfbot should just use the buildfarm client directly. Hmm.\n> Another idea would be to figure out how to make a tarball of all log\n> files that you can download for inspection with better tools at home\n> when things go wrong. It would rapidly blow through the 1GB limit for\n> stored \"artefacts\" on open source/community Cirrus accounts though, so\n> we'd need to figure out how to manage retention carefully.\n\n\nI did some thinking about this. How about if we have the make files and\nthe msvc build system create a well known file with the location(s) to\nsearch for log files if there's an error. Each bit of testing could\noverwrite this file before starting testing, and then tools like cfbot\nwould know where to look for files to report? To keep things clean for\nother users the file would only be created if, say,\nPG_NEED_ERROR_LOG_LOCATIONS is set. 
The well known location would be\nsomething like \"$(top_builddir)/error_log_locations.txt\", and individual\nMakefiles would have entries something like:\n\n\n override ERROR_LOG_LOCATIONS =\n $(top_builddir)/contrib/test_decoding/tmp_check/log\n\n\nIf this seems like a good idea I can go and try to make that happen.\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Thu, 18 Feb 2021 11:01:03 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Finding cause of test fails on the cfbot site" }, { "msg_contents": "On 2/18/21 11:01 AM, Andrew Dunstan wrote:\n> On 2/17/21 3:42 PM, Thomas Munro wrote:\n>> On Thu, Feb 18, 2021 at 9:18 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>> On 2/17/21 11:06 AM, Tom Lane wrote:\n>>>> Peter Smith <smithpb2250@gmail.com> writes:\n>>>>> I saw that one of our commitfest entries (32/2914) is recently\n>>>>> reporting a fail on the cfbot site [1]. I thought this was all ok a\n>>>>> few days ago.\n>>>>> ...\n>>>>> Is there any other detailed information available anywhere, e.g.\n>>>>> logs?, which might help us work out what was the cause of the test\n>>>>> failure?\n>>>> AFAIK the cfbot doesn't capture anything beyond the session typescript.\n>>>> However, this doesn't look that hard to reproduce locally ... have you\n>>>> tried, using similar configure options to what that cfbot run did?\n>>>> Once you did reproduce it, there'd be logs under\n>>>> contrib/test_decoding/tmp_check/.\n>>> yeah. The cfbot runs check-world which makes it difficult for it to know\n>>> which log files to show when there's an error. 
That's a major part of\n>>> the reason the buildfarm runs a much finer grained set of steps.\n>> Yeah, it's hard to make it print out just the right logs without\n>> dumping so much stuff that it's hard to see the wood for the trees;\n>> perhaps if the Makefile had an option to dump relevant stuff for the\n>> specific tests that failed, or perhaps the buildfarm is already better\n>> at that and cfbot should just use the buildfarm client directly. Hmm.\n>> Another idea would be to figure out how to make a tarball of all log\n>> files that you can download for inspection with better tools at home\n>> when things go wrong. It would rapidly blow through the 1GB limit for\n>> stored \"artefacts\" on open source/community Cirrus accounts though, so\n>> we'd need to figure out how to manage retention carefully.\n>\n> I did some thinking about this. How about if we have the make files and\n> the msvc build system create a well known file with the location(s) to\n> search for log files if there's an error. Each bit of testing could\n> overwrite this file before starting testing, and then tools like cfbot\n> would know where to look for files to report? To keep things clean for\n> other users the file would only be created if, say,\n> PG_NEED_ERROR_LOG_LOCATIONS is set. 
The well known location would be\n> something like \"$(top_builddir)/error_log_locations.txt\", and individual\n> Makefiles would have entries something like:,\n>\n>\n> override ERROR_LOG_LOCATIONS =\n> $(top_builddir)/contrib/test_decoding/tmp_check/log\n>\n>\n> If this seems like a good idea I can go and try to make that happen.\n>\n>\n\nhere's a very small and simple (and possibly naive)  POC patch that\ndemonstrates this and seems to do the right thing.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Fri, 19 Feb 2021 09:54:34 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Finding cause of test fails on the cfbot site" }, { "msg_contents": "Here is another related question about the cfbot error reporting -\n\nThe main cfbot \"status page\" [1] still shows a couple of fails for the\n32/2914 (for freebsd & linux). But looking more closely, those fails\nare not from the latest run. e.g. I also found this execution\n\"history\" page [2] for our patch which shows the most recent run was\nok for commit a7e715.\n\n~~\n\nSo it seems like there is some kind of rule that says the main status\npage will still indicate \"recent* errors (even if the latest execution\nwas ok)...\n\nIIUC that explains the difference between a hollow red 'X' (old fail)\nand a solid red 'X' fail (new fail)? And I am guessing if our patch\ncontinues to work ok (for how long?) then that hollow red 'X' may\nmorph into a solid green 'tick' (when the old fail becomes too old to\ncare about anymore)?\n\nBut those are just my guesses based on those icon tooltips. What\n*really* are the rules for those main page status indicators, and how\nlong do the old failure icons linger around before changing to success\nicons? 
(Apologies if a legend for those icons is already described\nsomewhere - I didn't find it).\n\nThanks!\n\n------\n[1] http://cfbot.cputube.org/\n[2] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/32/2914\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Sat, 20 Feb 2021 08:31:02 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Finding cause of test fails on the cfbot site" }, { "msg_contents": "On Sat, Feb 20, 2021 at 10:31 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> Here is another related question about the cfbot error reporting -\n>\n> The main cfbot \"status page\" [1] still shows a couple of fails for the\n> 32/2914 (for freebsd & linux). But looking more closely, those fails\n> are not from the latest run. e.g. I also found this execution\n> \"history\" page [2] for our patch which shows the most recent run was\n> ok for commit a7e715.\n>\n> ~~\n>\n> So it seems like there is some kind of rule that says the main status\n> page will still indicate \"recent* errors (even if the latest execution\n> was ok)...\n\nHmmph. It seems like there is indeed some kind of occasional glitch,\npossible a bug in my code for picking up statuses from Cirrus through\ntheir GraphQL API (that's a recent thing; we had to change providers\ndue to another CI's policy changes in January, and apparently\nsomething in the new pipeline isn't quite fully baked). Will look\ninto that this weekend. Sorry about that, and thanks for letting me\nknow.\n\n> IIUC that explains the difference between a hollow red 'X' (old fail)\n> and a solid red 'X' fail (new fail)? And I am guessing if our patch\n> continues to work ok (for how long?) then that hollow red 'X' may\n> morph into a solid green 'tick' (when the old fail becomes too old to\n> care about anymore)?\n>\n> But those are just my guesses based on those icon tooltips. 
What\n> *really* are the rules for those main page status indicators, and how\n> long do the old failure icons linger around before changing to success\n> icons? (Apologies if a legend for those icons is already described\n> somewhere - I didn't find it).\n\nYeah, I will try to clarify the UI a bit...\n\n\n", "msg_date": "Sat, 20 Feb 2021 11:33:21 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Finding cause of test fails on the cfbot site" }, { "msg_contents": "On Sat, Feb 20, 2021 at 11:33 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sat, Feb 20, 2021 at 10:31 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > Here is another related question about the cfbot error reporting -\n> >\n> > The main cfbot \"status page\" [1] still shows a couple of fails for the\n> > 32/2914 (for freebsd & linux). But looking more closely, those fails\n> > are not from the latest run. e.g. I also found this execution\n> > \"history\" page [2] for our patch which shows the most recent run was\n> > ok for commit a7e715.\n\n> Hmmph. It seems like there is indeed some kind of occasional glitch,\n> possible a bug in my code for picking up statuses from Cirrus through\n> their GraphQL API (that's a recent thing; we had to change providers\n> due to another CI's policy changes in January, and apparently\n> something in the new pipeline isn't quite fully baked). Will look\n> into that this weekend. Sorry about that, and thanks for letting me\n> know.\n\nShould be fixed now. 
(Embarrassing detail: I used flock(LOCK_EX) to\nprevent two copies of a cron job from running at the same time, but\nPython helpfully garbage collected and closed the fd, releasing the\nlock. Rarely, the script would run for long enough to have some state\nclobbered by the next run, so you wouldn't see the latest result.)\n\n\n", "msg_date": "Mon, 22 Feb 2021 12:19:41 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Finding cause of test fails on the cfbot site" }, { "msg_contents": "On Sat, Feb 20, 2021 at 3:54 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> here's a very small and simple (and possibly naive) POC patch that\n> demonstrates this and seems to do the right thing.\n\nAs a small variation that might be more parallelism-friendly, would it\nbe better to touch a file with a known name in any subdirectory that\ncontains potentially interesting logs, and rm it when a test succeeds?\n\n\n", "msg_date": "Mon, 22 Feb 2021 15:28:27 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Finding cause of test fails on the cfbot site" }, { "msg_contents": "Hi,\n\nOn 2021-02-17 15:18:02 -0500, Andrew Dunstan wrote:\n> yeah. The cfbot runs check-world which makes it difficult for it to know\n> which log files to show when there's an error. That's a major part of\n> the reason the buildfarm runs a much finer grained set of steps.\n\nI really think we need a better solution for this across the different\nuse-cases of running tests. For development parallel check-world is\nimportant for a decent hack-test loop. But I waste a fair bit of time to\nscroll back to find the original source of failures. 
And on the\nbuildfarm we waste a significant amount of time by limiting parallelism\ndue to the non-parallel sequence of finer grained steps.\n\nAnd it's not just about logs - even just easily seeing the first\nreported test failure without needing to search through large amounts of\ntext would be great.\n\nWith, um, more modern buildtools (e.g. ninja) you'll at least get the\nlast failure displayed at the end, instead of seing a lot of other\nthings after it like with make.\n\n\nMy suspicion is that, given the need to have this work for both msvc and\nmake, writing an in-core test-runner script is the only real option to\nimprove upon the current situation.\n\nFor make it'd not be hard to add a recursive 'listchecks' target listing\nthe individual tests that need to be run. Hacking up vcregress.pl to do\nthat, instead of what it currently does, shouldn't be too hard either.\n\n\nOnce there's a list of commands that need to be run it's not hard to\nwrite a loop in perl that runs up to N tests in parallel, saving their\noutput. Which then allows to display the failing test reports at the\nend.\n\n\nIf we then also add a convention that each test outputs something like\nTESTLOG: path/to/logfile\n...\nit'd not be hard to add support for the test runner to list the files\nthat cfbot et al should output.\n\n\nLooking around the tree, the most annoying bit to implement something\nlike this is that things below src/bin/, src/interfaces, src/test,\nsrc/pl implement their own check, installcheck targets. Given the number\nof these that just boil down to a variant of\n\ncheck:\n\t$(pg_regress_check)\n $(prove_check)\ninstallcheck:\n\t$(pg_regress_installcheck)\n\nit seems we should lift the REGRESS and TAP_TESTS specific logic in\npgxs.mk up into src/Makefiles.global. 
Which then would make something\nlist a global listchecks target easy.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 21 Feb 2021 19:34:47 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Finding cause of test fails on the cfbot site" }, { "msg_contents": "\nOn 2/21/21 9:28 PM, Thomas Munro wrote:\n> On Sat, Feb 20, 2021 at 3:54 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> here's a very small and simple (and possibly naive) POC patch that\n>> demonstrates this and seems to do the right thing.\n> As a small variation that might be more parallelism-friendly, would it\n> be better to touch a file with a known name in any subdirectory that\n> contains potentially interesting logs, and rm it when a test succeeds?\n\n\n\nYes, that sounds better, Thanks.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 22 Feb 2021 09:17:28 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Finding cause of test fails on the cfbot site" }, { "msg_contents": "\nOn 2/21/21 10:34 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2021-02-17 15:18:02 -0500, Andrew Dunstan wrote:\n>> yeah. The cfbot runs check-world which makes it difficult for it to know\n>> which log files to show when there's an error. That's a major part of\n>> the reason the buildfarm runs a much finer grained set of steps.\n> I really think we need a better solution for this across the different\n> use-cases of running tests. For development parallel check-world is\n> important for a decent hack-test loop. But I waste a fair bit of time to\n> scroll back to find the original source of failures. And on the\n> buildfarm we waste a significant amount of time by limiting parallelism\n> due to the non-parallel sequence of finer grained steps.\n\n\nMaybe but running fast isn't really a design goal of the buildfarm. 
It's\nmeant to be automated so it doesn't matter if it takes 10 or 20 or 60\nminutes.\n\n\nAnother reason for using fine grained tasks is to be able to\ninclude/exclude them as needed. See the buildfarm's\nexclude-steps/only-steps parameters.\n\n\nThat said there is some provision for parallelism, in that multiple\nbranches and multiple members can be tested at the same time, and\nrun_branches.pl will manage that fairly nicely for you. See\n<https://wiki.postgresql.org/wiki/PostgreSQL_Buildfarm_Howto#Running_in_Parallel>\nfor details\n\n\n> And it's not just about logs - even just easily seeing the first\n> reported test failure without needing to search through large amounts of\n> text would be great.\n>\n> With, um, more modern buildtools (e.g. ninja) you'll at least get the\n> last failure displayed at the end, instead of seing a lot of other\n> things after it like with make.\n>\n>\n> My suspicion is that, given the need to have this work for both msvc and\n> make, writing an in-core test-runner script is the only real option to\n> improve upon the current situation.\n\n\n\nOk ... be prepared for a non-trivial maintenance cost, however, which\nwill be born by those of us fluent in perl, the only realistic\npossibility unless we want to add to build dependencies. That's far from\neveryone.\n\n\nPart of the problem that this isn't going to solve is the sheer volume\nthat some tests produce. For example, the pg_dump tests produce about\n40k lines  / 5Mb of log.\n\n\n\n> For make it'd not be hard to add a recursive 'listchecks' target listing\n> the individual tests that need to be run. Hacking up vcregress.pl to do\n> that, instead of what it currently does, shouldn't be too hard either.\n>\n>\n> Once there's a list of commands that need to be run it's not hard to\n> write a loop in perl that runs up to N tests in parallel, saving their\n> output. 
Which then allows to display the failing test reports at the\n> end.\n>\n>\n> If we then also add a convention that each test outputs something like\n> TESTLOG: path/to/logfile\n> ...\n> it'd not be hard to add support for the test runner to list the files\n> that cfbot et al should output.\n\n\nYeah, there is code in the buildfarm that contains a lot of building\nblocks that can be used for this sort of stuff, see the PGBuild::Utils\nand PGBuild::Log modules.\n\n\n> Looking around the tree, the most annoying bit to implement something\n> like this is that things below src/bin/, src/interfaces, src/test,\n> src/pl implement their own check, installcheck targets. Given the number\n> of these that just boil down to a variant of\n>\n> check:\n> \t$(pg_regress_check)\n> $(prove_check)\n> installcheck:\n> \t$(pg_regress_installcheck)\n>\n> it seems we should lift the REGRESS and TAP_TESTS specific logic in\n> pgxs.mk up into src/Makefiles.global. Which then would make something\n> list a global listchecks target easy.\n>\n\nYeah, some of this stuff has grown a bit haphazardly, and maybe needs\nsome rework.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 22 Feb 2021 09:46:50 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Finding cause of test fails on the cfbot site" } ]
[ { "msg_contents": "While reviewing the NSS patch [1], I noticed that the cryptohash\r\nimplementation for OpenSSL doesn't set up any locking callbacks in\r\nfrontend code. I think there has to be a call to\r\nOPENSSL_set_locking_callback() before libpq starts reaching into the\r\nEVP_* API, if ENABLE_THREAD_SAFETY and HAVE_CRYPTO_LOCK are both true.\r\n\r\nThis would only affect threaded libpq clients running OpenSSL 1.0.2 and\r\nbelow, and it looks like the most likely code path to be affected is\r\nthe OpenSSL error stack. So if anything went wrong with one of those\r\nhash calls, it's possible that libpq would crash (or worse, silently\r\nmisbehave somewhere in the TLS stack) instead of gracefully reporting\r\nan error. [2] is an example of this in the wild.\r\n\r\n--Jacob\r\n\r\n[1] https://www.postgresql.org/message-id/40095f48c3c6d556293cb0ecf80ea10cdf7d26b3.camel%40vmware.com\r\n[2] https://github.com/openssl/openssl/issues/4690\r\n", "msg_date": "Wed, 17 Feb 2021 18:34:36 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "cryptohash: missing locking functions for OpenSSL <= 1.0.2?" }, { "msg_contents": "On Wed, Feb 17, 2021 at 06:34:36PM +0000, Jacob Champion wrote:\n> This would only affect threaded libpq clients running OpenSSL 1.0.2 and\n> below, and it looks like the most likely code path to be affected is\n> the OpenSSL error stack. So if anything went wrong with one of those\n> hash calls, it's possible that libpq would crash (or worse, silently\n> misbehave somewhere in the TLS stack) instead of gracefully reporting\n> an error. [2] is an example of this in the wild.\n\nI have been discussing a bit this issue with Jacob, and that's a\nproblem we would need to address on HEAD. 
First, I have been looking\nat this stuff in older versions with MD5 and SHA256 used by SCRAM when\nit comes to ~13 with libpq:\n- MD5 is based on the internal implementation of Postgres even when\nbuilding libpq with OpenSSL, so that would not be an issue.\n- SHA256 is a different story though, because when building with\nOpenSSL we would go through SHA256_{Init,Update,Final} for SCRAM\nauthentication. In the context of a SSL connection, the crypto part\nis initialized. But that would *not* be the case of a connection in a\nnon-SSL context. Fortunately, and after looking at the OpenSSL code\n(fips_md_init_ctx, HASH_UPDATE, etc.), there is no sign of locking\nhandling or errors, so I think that we are safe there.\n\nNow comes the case of HEAD that uses EVP for MD5 and SHA256. A SSL\nconnection would initialize the crypto part, but that does not happen\nfor a non-SSL connection. So, logically, one could run into issues if\nusing MD5 or SCRAM with OpenSSL <= 1.0.2 (pgbench with a high number\nof threads does not complain by the way), and we are not yet in a\nstage where we should drop this version either, even if it has been\nEOL'd by upstream at the end of 2019.\n\nWe have the code in place to properly initialize the crypto locking in\nlibpq with ENABLE_THREAD_SAFETY, but the root of the issue is that the\nSSL and crypto initializations are grouped together. What we need to\ndo here is to split those phases of the initialization so as non-SSL\nconnections can use the crypto part properly, as pqsecure_initialize\ngets only called now when libpq negotiates SSL with the postmaster.\nIt seems to me that we should make sure of a proper reset of the\ncrypto part within pqDropConnection(), while the initialization needs\nto happen in PQconnectPoll().\n--\nMichael", "msg_date": "Thu, 18 Feb 2021 11:04:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: cryptohash: missing locking functions for OpenSSL <= 1.0.2?" 
}, { "msg_contents": "On Thu, Feb 18, 2021 at 11:04:05AM +0900, Michael Paquier wrote:\n> We have the code in place to properly initialize the crypto locking in\n> libpq with ENABLE_THREAD_SAFETY, but the root of the issue is that the\n> SSL and crypto initializations are grouped together. What we need to\n> do here is to split those phases of the initialization so as non-SSL\n> connections can use the crypto part properly, as pqsecure_initialize\n> gets only called now when libpq negotiates SSL with the postmaster.\n> It seems to me that we should make sure of a proper reset of the\n> crypto part within pqDropConnection(), while the initialization needs\n> to happen in PQconnectPoll().\n\nSo, I have tried a couple of things with a debug build of OpenSSL\n1.0.2 at hand (two locks for the crypto and SSL initializations but\nSSL_new() grabs some random bytes that need the same callback to be\nset or the state of the threads is messed up, some global states to\ncontrol the load), and the simplest solution I have come up with is to\ncontrol in each pg_conn if the crypto callbacks have been initialized\nor not so as we avoid multiple inits and/or drops of the state for a\nsingle connection. I have arrived at this conclusion after hunting\ndown cases with pqDropConnection() which would could be called\nmultiple times, particularly if there are connection attempts to\nmultiple hosts.\n\nThe attached patch implements things this way, and initializes the\ncrypto callbacks before sending the startup packet, before deciding if\nSSL needs to be requested or not. I have played with several\nthreading scenarios with this patch, with and without OpenSSL, and the\nnumbers match in terms of callback loading and unloading (the global\ncounter used in fe-secure-openssl.c gets to zero). 
\n--\nMichael", "msg_date": "Fri, 19 Feb 2021 14:37:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: cryptohash: missing locking functions for OpenSSL <= 1.0.2?" }, { "msg_contents": "On Fri, Feb 19, 2021 at 02:37:06PM +0900, Michael Paquier wrote:\n> The attached patch implements things this way, and initializes the\n> crypto callbacks before sending the startup packet, before deciding if\n> SSL needs to be requested or not. I have played with several\n> threading scenarios with this patch, with and without OpenSSL, and the\n> numbers match in terms of callback loading and unloading (the global\n> counter used in fe-secure-openssl.c gets to zero). \n\nI have done more work and much more tests with this patch, polishing\nthings as of the attached v2. First, I don't see any performance\nimpact or concurrency issues, using up to 200 threads with pgbench -C\n-n -j N -c N -f blah.sql where the SQL file includes a single\nmeta-command like that for instance:\n\\set a 1\n\nThis ensures that connection requests happen a maximum in concurrency,\nand libpq stays close to the maximum for the number of open threads.\nAttached is a second, simple program that I have used to stress the\ncase of threads using both SSL and non-SSL connections in parallel to\ncheck for the consistency of the callbacks and their release, mainly\nacross MD5 and SCRAM.\n\nExtra eyes are welcome here, though I feel comfortable with the\napproach taken here.\n--\nMichael", "msg_date": "Wed, 3 Mar 2021 15:30:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: cryptohash: missing locking functions for OpenSSL <= 1.0.2?" 
}, { "msg_contents": "On Wed, 2021-03-03 at 15:30 +0900, Michael Paquier wrote:\r\n> Extra eyes are welcome here, though I feel comfortable with the\r\n> approach taken here.\r\n\r\nI have one suggestion for the new logic:\r\n\r\n> else\r\n> {\r\n> /*\r\n> * In the non-SSL case, just remove the crypto callbacks. This code\r\n> * path has no dependency on any pending SSL calls.\r\n> */\r\n> destroy_needed = true;\r\n> }\r\n> [...]\r\n> if (destroy_needed && conn->crypto_loaded)\r\n> {\r\n> destroy_ssl_system();\r\n> conn->crypto_loaded = false;\r\n> }\r\n\r\nI had to convince myself that this logic is correct -- we set\r\ndestroy_needed even if crypto is not enabled, but then check later to\r\nmake sure that crypto_loaded is true before doing anything. What would\r\nyou think about moving the conn->crypto_loaded check to the else\r\nbranch, so that destroy_needed is only set if we actually need it?\r\n\r\nEither way, the patch looks good to me and behaves nicely in testing.\r\nThanks!\r\n\r\n--Jacob\r\n", "msg_date": "Mon, 8 Mar 2021 18:06:32 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: cryptohash: missing locking functions for OpenSSL <= 1.0.2?" }, { "msg_contents": "On Mon, Mar 08, 2021 at 06:06:32PM +0000, Jacob Champion wrote:\n> I had to convince myself that this logic is correct -- we set\n> destroy_needed even if crypto is not enabled, but then check later to\n> make sure that crypto_loaded is true before doing anything. What would\n> you think about moving the conn->crypto_loaded check to the else\n> branch, so that destroy_needed is only set if we actually need it?\n\nDo you mean something like the attached? If I recall my mood from the\nmoment, I think that I did that to be more careful with the case where\nthe client has its own set of callbacks set (pq_init_crypto_lib as\nfalse) but that does not matter as this is double-checked in\ndestroy_ssl_system(). 
I have adjusted some comments after more\nreview.\n--\nMichael", "msg_date": "Wed, 10 Mar 2021 17:21:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: cryptohash: missing locking functions for OpenSSL <= 1.0.2?" }, { "msg_contents": "On Wed, 2021-03-10 at 17:21 +0900, Michael Paquier wrote:\r\n> Do you mean something like the attached?\r\n\r\nYes! Patch LGTM.\r\n\r\n--Jacob\r\n", "msg_date": "Wed, 10 Mar 2021 17:05:38 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": true, "msg_subject": "Re: cryptohash: missing locking functions for OpenSSL <= 1.0.2?" }, { "msg_contents": "On Wed, Mar 10, 2021 at 05:05:38PM +0000, Jacob Champion wrote:\n> On Wed, 2021-03-10 at 17:21 +0900, Michael Paquier wrote:\n> > Do you mean something like the attached?\n> \n> Yes! Patch LGTM.\n\nThanks Jacob for double-checking. I have looked at that again slowly\ntoday, and applied it after some light adjustments in the comments.\n--\nMichael", "msg_date": "Thu, 11 Mar 2021 17:18:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: cryptohash: missing locking functions for OpenSSL <= 1.0.2?" } ]
[ { "msg_contents": "Hi hackers,\r\n\r\nI noticed some interesting role behavior that seems to be either a bug\r\nor a miss in the documentation. The documentation for SET ROLE claims\r\nthat RESET ROLE resets \"the current user identifier to be the current\r\nsession user identifier\" [0], but this doesn't seem to hold true when\r\n\"role\" has been set via pg_db_role_setting. Here is an example:\r\n\r\nsetup:\r\n postgres=# CREATE ROLE test2;\r\n CREATE ROLE\r\n postgres=# CREATE ROLE test1 LOGIN CREATEROLE IN ROLE test2;\r\n CREATE ROLE\r\n postgres=# ALTER ROLE test1 SET ROLE test2;\r\n ALTER ROLE\r\n\r\nafter logging in as test1:\r\n postgres=> SELECT SESSION_USER, CURRENT_USER;\r\n session_user | current_user\r\n --------------+--------------\r\n test1 | test2\r\n (1 row)\r\n\r\n postgres=> RESET ROLE;\r\n RESET\r\n postgres=> SELECT SESSION_USER, CURRENT_USER;\r\n session_user | current_user\r\n --------------+--------------\r\n test1 | test2\r\n (1 row)\r\n\r\nI believe this behavior is caused by the \"role\" getting set at\r\nPGC_S_GLOBAL, which sets the default used by RESET [1]. IMO this just\r\nrequires a small documentation fix. Here is my first attempt:\r\n\r\ndiff --git a/doc/src/sgml/ref/set_role.sgml b/doc/src/sgml/ref/set_role.sgml\r\nindex 739f2c5cdf..a69bfeae24 100644\r\n--- a/doc/src/sgml/ref/set_role.sgml\r\n+++ b/doc/src/sgml/ref/set_role.sgml\r\n@@ -54,7 +54,12 @@ RESET ROLE\r\n\r\n <para>\r\n The <literal>NONE</literal> and <literal>RESET</literal> forms reset the current\r\n- user identifier to be the current session user identifier.\r\n+ user identifier to the default value. The default value is whatever value it\r\n+ would be if no <command>SET</command> had been executed in the current\r\n+ session. This can be the command-line option value, the per-database default\r\n+ setting, or the per-user default setting for the role, if any such settings\r\n+ exist. 
Otherwise, the default value will be the current session user\r\n+ identifier.\r\n These forms can be executed by any user.\r\n </para>\r\n </refsect1>\r\n\r\nNathan\r\n\r\n[0] https://www.postgresql.org/docs/devel/sql-set-role.html\r\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/include/utils/guc.h;h=5004ee41;hb=HEAD#l79\r\n\r\n", "msg_date": "Wed, 17 Feb 2021 18:56:24 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "documentation fix for SET ROLE" }, { "msg_contents": "On Wednesday, February 17, 2021, Bossart, Nathan <bossartn@amazon.com>\nwrote:\n\n>\n> postgres=# ALTER ROLE test1 SET ROLE test2;\n> ALTER ROLE\n>\n\nI would not have expected this to work - “role” isn’t a\nconfiguration_parameter. Its actually cool that it does, but this doc fix\nshould address this oversight as well.\n\nDavid J.\n\nOn Wednesday, February 17, 2021, Bossart, Nathan <bossartn@amazon.com> wrote:\n    postgres=# ALTER ROLE test1 SET ROLE test2;\n    ALTER ROLE\nI would not have expected this to work - “role” isn’t a configuration_parameter.  Its actually cool that it does, but this doc fix should address this oversight as well.David J.", "msg_date": "Wed, 17 Feb 2021 12:12:47 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: documentation fix for SET ROLE" }, { "msg_contents": "On 2/17/21 2:12 PM, David G. Johnston wrote:\n> On Wednesday, February 17, 2021, Bossart, Nathan <bossartn@amazon.com\n> <mailto:bossartn@amazon.com>> wrote:\n> \n> \n>     postgres=# ALTER ROLE test1 SET ROLE test2;\n>     ALTER ROLE\n> \n> \n> I would not have expected this to work - “role” isn’t a\n> configuration_parameter.  Its actually cool that it does, but this doc fix\n> should address this oversight as well.\n\n\nI was surprised this worked too.\n\nBut the behavior is consistent with other GUCs. In other words, when you \"ALTER\nROLE ... 
SET ...\" you change the default value for the session, and therefore a\nRESET just changes to that value.\n\n-- login as postgres\nnmx=# show work_mem;\n work_mem\n----------\n 200MB\n(1 row)\n\nnmx=# set work_mem = '42MB';\nSET\nnmx=# show work_mem;\n work_mem\n----------\n 42MB\n(1 row)\n\nnmx=# reset work_mem;\nRESET\nnmx=# show work_mem;\n work_mem\n----------\n 200MB\n(1 row)\n\nALTER ROLE test1 SET work_mem = '42MB';\n\n-- login as test1\nnmx=> show work_mem;\n work_mem\n----------\n 42MB\n(1 row)\n\nnmx=> reset work_mem;\nRESET\nnmx=> show work_mem;\n work_mem\n----------\n 42MB\n(1 row)\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development", "msg_date": "Wed, 17 Feb 2021 15:14:07 -0500", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: documentation fix for SET ROLE" }, { "msg_contents": "On 2/17/21, 12:15 PM, \"Joe Conway\" <mail@joeconway.com> wrote:\r\n> On 2/17/21 2:12 PM, David G. Johnston wrote:\r\n>> On Wednesday, February 17, 2021, Bossart, Nathan <bossartn@amazon.com\r\n>> <mailto:bossartn@amazon.com>> wrote:\r\n>> \r\n>> \r\n>> postgres=# ALTER ROLE test1 SET ROLE test2;\r\n>> ALTER ROLE\r\n>> \r\n>> \r\n>> I would not have expected this to work - “role” isn’t a\r\n>> configuration_parameter. Its actually cool that it does, but this doc fix\r\n>> should address this oversight as well.\r\n>\r\n>\r\n> I was surprised this worked too.\r\n>\r\n> But the behavior is consistent with other GUCs. In other words, when you \"ALTER\r\n> ROLE ... SET ...\" you change the default value for the session, and therefore a\r\n> RESET just changes to that value.\r\n\r\nLooking further, I noticed that session_authorization does not work\r\nthe same way. AFAICT this is because it's set via SetConfigOption()\r\nin InitializeSessionUserId(). 
If you initialize role here, it acts\r\nthe same as session_authorization.\r\n\r\ndiff --git a/src/backend/utils/init/miscinit.c b/src/backend/utils/init/miscinit.c\r\nindex 0f67b99cc5..a201bb3766 100644\r\n--- a/src/backend/utils/init/miscinit.c\r\n+++ b/src/backend/utils/init/miscinit.c\r\n@@ -761,6 +761,7 @@ InitializeSessionUserId(const char *rolename, Oid roleid)\r\n }\r\n\r\n /* Record username and superuser status as GUC settings too */\r\n+ SetConfigOption(\"role\", rname, PGC_BACKEND, PGC_S_OVERRIDE);\r\n SetConfigOption(\"session_authorization\", rname,\r\n PGC_BACKEND, PGC_S_OVERRIDE);\r\n SetConfigOption(\"is_superuser\",\r\n\r\nNathan\r\n\r\n", "msg_date": "Wed, 17 Feb 2021 20:30:46 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: documentation fix for SET ROLE" }, { "msg_contents": "On 2/17/21 2:12 PM, David G. Johnston wrote:\r\n> On Wednesday, February 17, 2021, Bossart, Nathan <bossartn@amazon.com\r\n> <mailto:bossartn@amazon.com>> wrote:\r\n>\r\n>\r\n> postgres=# ALTER ROLE test1 SET ROLE test2;\r\n> ALTER ROLE\r\n>\r\n>\r\n> I would not have expected this to work - “role” isn’t a\r\n> configuration_parameter. Its actually cool that it does, but this doc fix\r\n> should address this oversight as well.\r\n\r\nHere's a patch that adds \"role\" and \"session authorization\" as\r\nconfiguration parameters, too.\r\n\r\nNathan", "msg_date": "Fri, 19 Feb 2021 01:18:30 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: documentation fix for SET ROLE" }, { "msg_contents": "On Thu, Feb 18, 2021 at 6:18 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n\n> On 2/17/21 2:12 PM, David G. 
Johnston wrote:\n> > On Wednesday, February 17, 2021, Bossart, Nathan <bossartn@amazon.com\n> > <mailto:bossartn@amazon.com>> wrote:\n> >\n> >\n> > postgres=# ALTER ROLE test1 SET ROLE test2;\n> > ALTER ROLE\n> >\n> >\n> > I would not have expected this to work - “role” isn’t a\n> > configuration_parameter. Its actually cool that it does, but this doc\n> fix\n> > should address this oversight as well.\n>\n> Here's a patch that adds \"role\" and \"session authorization\" as\n> configuration parameters, too.\n>\n>\nYou will want to add this to the commitfest if you haven't already.\n\nI would suggest adding a section titled \"Identification\" and placing these\nunder that.\n\nReading it over it looks good. One point though: SET and SET ROLE are\nindeed \"at run-time\" (not 'run time'). ALTER ROLE and ALTER DATABASE\nshould be considered \"at connection-time\" just like the command-line\noptions.\n\nDavid J.\n\nOn Thu, Feb 18, 2021 at 6:18 PM Bossart, Nathan <bossartn@amazon.com> wrote:On 2/17/21 2:12 PM, David G. Johnston wrote:\n> On Wednesday, February 17, 2021, Bossart, Nathan <bossartn@amazon.com\n> <mailto:bossartn@amazon.com>> wrote:\n>\n>\n>         postgres=# ALTER ROLE test1 SET ROLE test2;\n>         ALTER ROLE\n>\n>\n> I would not have expected this to work - “role” isn’t a\n> configuration_parameter.  Its actually cool that it does, but this doc fix\n> should address this oversight as well.\n\nHere's a patch that adds \"role\" and \"session authorization\" as\nconfiguration parameters, too.You will want to add this to the commitfest if you haven't already.I would suggest adding a section titled \"Identification\" and placing these under that.Reading it over it looks good.  One point though: SET and SET ROLE are indeed \"at run-time\" (not 'run time').  ALTER ROLE and ALTER DATABASE should be considered \"at connection-time\" just like the command-line options.David J.", "msg_date": "Mon, 8 Mar 2021 16:41:29 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: documentation fix for SET ROLE" }, { "msg_contents": "On Mon, Mar 8, 2021 at 4:41 PM David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Thu, Feb 18, 2021 at 6:18 PM Bossart, Nathan <bossartn@amazon.com>\n> wrote:\n>\n>> On 2/17/21 2:12 PM, David G. Johnston wrote:\n>> > On Wednesday, February 17, 2021, Bossart, Nathan <bossartn@amazon.com\n>> > <mailto:bossartn@amazon.com>> wrote:\n>> >\n>> >\n>> > postgres=# ALTER ROLE test1 SET ROLE test2;\n>> > ALTER ROLE\n>> >\n>> >\n>> > I would not have expected this to work - “role” isn’t a\n>> > configuration_parameter. Its actually cool that it does, but this doc\n>> fix\n>> > should address this oversight as well.\n>>\n>> Here's a patch that adds \"role\" and \"session authorization\" as\n>> configuration parameters, too.\n>>\n>>\n> You will want to add this to the commitfest if you haven't already.\n>\n> I would suggest adding a section titled \"Identification\" and placing these\n> under that.\n>\n> Reading it over it looks good. One point though: SET and SET ROLE are\n> indeed \"at run-time\" (not 'run time'). ALTER ROLE and ALTER DATABASE\n> should be considered \"at connection-time\" just like the command-line\n> options.\n>\n>\nAlso, as a nearby email just reminded me, the determination of which role\nname is used to figure out default settings is the presented user name, not\nthe one that would result from a connection-time role change as described\nhere - though this should be tested, and then documented.\n\nDavid J.\n\nOn Mon, Mar 8, 2021 at 4:41 PM David G. Johnston <david.g.johnston@gmail.com> wrote:On Thu, Feb 18, 2021 at 6:18 PM Bossart, Nathan <bossartn@amazon.com> wrote:On 2/17/21 2:12 PM, David G. 
Johnston wrote:\n> On Wednesday, February 17, 2021, Bossart, Nathan <bossartn@amazon.com\n> <mailto:bossartn@amazon.com>> wrote:\n>\n>\n>         postgres=# ALTER ROLE test1 SET ROLE test2;\n>         ALTER ROLE\n>\n>\n> I would not have expected this to work - “role” isn’t a\n> configuration_parameter.  Its actually cool that it does, but this doc fix\n> should address this oversight as well.\n\nHere's a patch that adds \"role\" and \"session authorization\" as\nconfiguration parameters, too.You will want to add this to the commitfest if you haven't already.I would suggest adding a section titled \"Identification\" and placing these under that.Reading it over it looks good.  One point though: SET and SET ROLE are indeed \"at run-time\" (not 'run time').  ALTER ROLE and ALTER DATABASE should be considered \"at connection-time\" just like the command-line options.Also, as a nearby email just reminded me, the determination of which role name is used to figure out default settings is the presented user name, not the one that would result from a connection-time role change as described here - though this should be tested, and then documented.David J.", "msg_date": "Mon, 8 Mar 2021 16:48:34 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: documentation fix for SET ROLE" }, { "msg_contents": "Thanks for reviewing.\r\n\r\nOn 3/8/21 3:49 PM, David G. Johnston wrote:\r\n> On Mon, Mar 8, 2021 at 4:41 PM David G. Johnston <david.g.johnston@gmail.com> wrote:\r\n>> You will want to add this to the commitfest if you haven't already.\r\n\r\nHere is the commitfest entry: https://commitfest.postgresql.org/32/2993/\r\n\r\n>> I would suggest adding a section titled \"Identification\" and placing these under that.\r\n\r\nGood idea.\r\n\r\n>> Reading it over it looks good. One point though: SET and SET ROLE are indeed \"at run-time\" (not 'run time'). 
ALTER ROLE and ALTER DATABASE should be considered \"at connection-time\" just like the command-line options.\r\n\r\nMakes sense. I've applied these changes.\r\n\r\n> Also, as a nearby email just reminded me, the determination of which role name is used to figure out default settings is the presented user name, not the one that would result from a connection-time role change as described here - though this should be tested, and then documented.\r\n\r\nYes, this seems to be correct. I've added a note about this behavior\r\nin the patch.\r\n\r\n -- setup\r\n CREATE ROLE test1;\r\n CREATE ROLE test2 WITH LOGIN IN ROLE test1;\r\n ALTER ROLE test1 SET client_min_messages = 'error';\r\n ALTER ROLE test2 SET client_min_messages = 'warning';\r\n ALTER ROLE test2 SET ROLE = 'test1';\r\n\r\n -- as test2\r\n postgres=> SELECT CURRENT_USER, setting FROM pg_settings WHERE name = 'client_min_messages';\r\n current_user | setting\r\n --------------+---------\r\n test1 | warning\r\n (1 row)\r\n\r\nNathan", "msg_date": "Wed, 10 Mar 2021 01:27:30 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: documentation fix for SET ROLE" }, { "msg_contents": "I have had a look at the patch, and while I agree that this should\nbe documented, I am not happy with the patch as it is.\n\nI think we should *not* document that under \"server configuration\".\nThis is confusing and will lead people to think that a role is\na configuration parameter. But you cannot add\n\n role = myrole\n\nto \"postgresql.conf\". 
A role is not a GUC.\n\nI think that the place to document this is\ndoc/src/sgml/ref/alter_role.sgml.\n\nThe second hunk of the patch is in the right place:\n\n--- a/doc/src/sgml/ref/set_role.sgml\n+++ b/doc/src/sgml/ref/set_role.sgml\n@@ -53,9 +53,13 @@ RESET ROLE\n </para>\n \n <para>\n- The <literal>NONE</literal> and <literal>RESET</literal> forms reset the current\n- user identifier to be the current session user identifier.\n- These forms can be executed by any user.\n+ The <literal>NONE</literal> form resets the current user identifier to the\n+ current session user identifier. The <literal>RESET</literal> form resets\n+ the current user identifier to the default value. The default value can be\n+ the command-line option value, the per-database default setting, or the\n+ per-user default setting for the originally authenticated session user, if\n+ any such settings exist. Otherwise, the default value will be the current\n+ session user identifier. These forms can be executed by any user.\n </para>\n </refsect1>\n\nPerhaps this could be reworded in a simpler fashion, like:\n\n<literal>SET ROLE NONE</literal> sets the user identifier to the current\nsession identifier, as returned by the <function>session_user</function>\nfunction. <literal>RESET ROLE</literal> sets the user identifier to the\nvalue it had after you connected to the database. 
This can be different\nfrom the session identifier if <literal>ALTER DATABASE</literal> or\n<literal>ALTER ROLE</literal> were used to assign a different default role.\n\n(I hope what I wrote is correct.)\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Thu, 11 Mar 2021 15:57:58 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: documentation fix for SET ROLE" }, { "msg_contents": "On Thu, Mar 11, 2021 at 7:58 AM Laurenz Albe <laurenz.albe@cybertec.at>\nwrote:\n\n> I think we should *not* document that under \"server configuration\".\n> This is confusing and will lead people to think that a role is\n> a configuration parameter. But you cannot add\n>\n> role = myrole\n>\n> to \"postgresql.conf\". A role is not a GUC.\n>\n> I think that the place to document this is\n> doc/src/sgml/ref/alter_role.sgml.\n>\n\nGood point. I agree that another syntax specification should be added to\nALTER ROLE/DATABASE cover this instead of shoe-horning it into the \"SET\nconfiguration_parameter\" syntax specification even though the syntax is\nnearly identical. It is indeed a different mechanic that just happens to\nshare a similar syntax. (On that note, does \"FROM CURRENT\" work with\n\"ROLE\"?)\n\nI'm a bit indifferent on the wording for RESET ROLE, though it should\nprobably mirror whatever wording we use for GUCs since this behaves like\none even if it isn't one technically.\n\nDavid J.\n\nOn Thu, Mar 11, 2021 at 7:58 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:I think we should *not* document that under \"server configuration\".\nThis is confusing and will lead people to think that a role is\na configuration parameter.  But you cannot add\n\n   role = myrole\n\nto \"postgresql.conf\".  A role is not a GUC.\n\nI think that the place to document this is\ndoc/src/sgml/ref/alter_role.sgml.Good point.  
I agree that another syntax specification should be added to ALTER ROLE/DATABASE cover this instead of shoe-horning it into the \"SET configuration_parameter\" syntax specification even though the syntax is nearly identical.  It is indeed a different mechanic that just happens to share a similar syntax. (On that note, does \"FROM CURRENT\" work with \"ROLE\"?)I'm a bit indifferent on the wording for RESET ROLE, though it should probably mirror whatever wording we use for GUCs since this behaves like one even if it isn't one technically.David J.", "msg_date": "Thu, 11 Mar 2021 09:08:47 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: documentation fix for SET ROLE" }, { "msg_contents": "Thanks for reviewing.\r\n\r\nOn 3/11/21, 6:59 AM, \"Laurenz Albe\" <laurenz.albe@cybertec.at> wrote:\r\n> I have had a look at the patch, and while I agree that this should\r\n> be documented, I am not happy with the patch as it is.\r\n>\r\n> I think we should *not* document that under \"server configuration\".\r\n> This is confusing and will lead people to think that a role is\r\n> a configuration parameter. But you cannot add\r\n>\r\n> role = myrole\r\n>\r\n> to \"postgresql.conf\". A role is not a GUC.\r\n>\r\n> I think that the place to document this is\r\n> doc/src/sgml/ref/alter_role.sgml.\r\n\r\nI don't think I totally agree that \"role\" and \"session_authorization\"\r\naren't GUCs. They are defined in guc.c, and \"role\" is referred to as\r\na GUC in both miscinit.c and variable.c. Plus, they are usable as\r\nconfiguration parameters in many of the same ways that ordinary GUCs\r\nare (e.g., SET, ALTER ROLE, ALTER DATABASE). It is true that \"role\"\r\nand \"session_authorization\" cannot be set in postgresql.conf and ALTER\r\nSYSTEM SET, and I think we should add a note to this effect in the\r\ndocumentation. 
However, I don't see the value in duplicating a\r\nparagraph about \"role\" and \"session_authorization\" in a number of\r\nstatements that already accept a configuration_parameter and point to\r\nthe chapter on Server Configuration.\r\n\r\nI do agree that that adding these parameters to the Server\r\nConfiguration section is a bit confusing. At the very least, the\r\nproposed patch would add them to the Client Connection Defaults\r\nsection, but it's still not ideal. This is probably why they've been\r\nleft out so far.\r\n\r\n> The second hunk of the patch is in the right place:\r\n>\r\n> --- a/doc/src/sgml/ref/set_role.sgml\r\n> +++ b/doc/src/sgml/ref/set_role.sgml\r\n> @@ -53,9 +53,13 @@ RESET ROLE\r\n> </para>\r\n>\r\n> <para>\r\n> - The <literal>NONE</literal> and <literal>RESET</literal> forms reset the current\r\n> - user identifier to be the current session user identifier.\r\n> - These forms can be executed by any user.\r\n> + The <literal>NONE</literal> form resets the current user identifier to the\r\n> + current session user identifier. The <literal>RESET</literal> form resets\r\n> + the current user identifier to the default value. The default value can be\r\n> + the command-line option value, the per-database default setting, or the\r\n> + per-user default setting for the originally authenticated session user, if\r\n> + any such settings exist. Otherwise, the default value will be the current\r\n> + session user identifier. These forms can be executed by any user.\r\n> </para>\r\n> </refsect1>\r\n>\r\n> Perhaps this could be reworded in a simpler fashion, like:\r\n>\r\n> <literal>SET ROLE NONE</literal> sets the user identifier to the current\r\n> session identifier, as returned by the <function>session_user</function>\r\n> function. <literal>RESET ROLE</literal> sets the user identifier to the\r\n> value it had after you connected to the database. 
This can be different\r\n> from the session identifier if <literal>ALTER DATABASE</literal> or\r\n> <literal>ALTER ROLE</literal> were used to assign a different default role.\r\n>\r\n> (I hope what I wrote is correct.)\r\n\r\nI like the simpler text, but I think it is missing a couple of things.\r\nIf no session default was set, RESET ROLE will set the role to the\r\ncurrent session user identifier, which can either be what it was when\r\nyou first connected or what you SET SESSION AUTHORIZATION to. The\r\nother thing missing is that \"role\" can be set via the command-line\r\noptions, too. Here's an attempt at including those things:\r\n\r\n <literal>SET ROLE NONE</literal> sets the current user identifier to the\r\n current session user identifier, as returned by\r\n <function>session_user</function>. <literal>RESET ROLE</literal> sets the\r\n current user identifier to the connection-time setting specified by the\r\n <link linkend=\"libpq-connect-options\">command-line options</link>,\r\n <link linkend=\"sql-alterrole\"><command>ALTER ROLE</command></link>, or\r\n <link linkend=\"sql-alterdatabase\"><command>ALTER DATABASE</command></link>,\r\n if any such settings exist. Otherwise, <literal>RESET ROLE</literal> sets\r\n the current user identifier to the current session user identifier. 
These\r\n forms can be executed by any user.\r\n\r\nI attached a new version of my proposed patch, but I acknowledge that\r\nmuch of it is still under discussion.\r\n\r\nNathan", "msg_date": "Thu, 11 Mar 2021 20:00:04 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: documentation fix for SET ROLE" }, { "msg_contents": "On Thursday, March 11, 2021, Bossart, Nathan <bossartn@amazon.com> wrote:\n\n> Thanks for reviewing.\n>\n> On 3/11/21, 6:59 AM, \"Laurenz Albe\" <laurenz.albe@cybertec.at> wrote:\n> > I have had a look at the patch, and while I agree that this should\n> > be documented, I am not happy with the patch as it is.\n> >\n> > I think we should *not* document that under \"server configuration\".\n> > This is confusing and will lead people to think that a role is\n> > a configuration parameter. But you cannot add\n> >\n> > role = myrole\n> >\n> > to \"postgresql.conf\". A role is not a GUC.\n> >\n> > I think that the place to document this is\n> > doc/src/sgml/ref/alter_role.sgml.\n>\n> I don't think I totally agree that \"role\" and \"session_authorization\"\n> aren't GUCs. They are defined in guc.c, and \"role\" is referred to as\n> a GUC in both miscinit.c and variable.c.\n\n\n>\nImplementation details are not that convincing to me. As a user I wouldn’t\nthink of these as being “server configuration” or even “client defaults”;\ntypically they are just representations of me as session state.\n\nThe minor bit of documentation pseudo-redundancy doesn’t bother me if I\naccept they are there own separate thing. 
The fact that set role and set\nsession authorization are entirely distinct top-level commands in our\ndocumentation, as opposed to bundled in with plain set, is a much more\nconvincing example for treating them uniquely and not just additional GUCs.\n\nDavid J.\n\nOn Thursday, March 11, 2021, Bossart, Nathan <bossartn@amazon.com> wrote:Thanks for reviewing.\n\nOn 3/11/21, 6:59 AM, \"Laurenz Albe\" <laurenz.albe@cybertec.at> wrote:\n> I have had a look at the patch, and while I agree that this should\n> be documented, I am not happy with the patch as it is.\n>\n> I think we should *not* document that under \"server configuration\".\n> This is confusing and will lead people to think that a role is\n> a configuration parameter.  But you cannot add\n>\n>    role = myrole\n>\n> to \"postgresql.conf\".  A role is not a GUC.\n>\n> I think that the place to document this is\n> doc/src/sgml/ref/alter_role.sgml.\n\nI don't think I totally agree that \"role\" and \"session_authorization\"\naren't GUCs.  They are defined in guc.c, and \"role\" is referred to as\na GUC in both miscinit.c and variable.c.Implementation details are not that convincing to me.  As a user I wouldn’t think of these as being “server configuration” or even “client defaults”; typically they are just representations of me as session state. The minor bit of documentation pseudo-redundancy doesn’t bother me if I accept they are there own separate thing.  The fact that set role and set session authorization are entirely distinct top-level commands in our documentation, as opposed to bundled in with plain set, is a much more convincing example for treating them uniquely and not just additional GUCs.David J.", "msg_date": "Thu, 11 Mar 2021 13:09:58 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: documentation fix for SET ROLE" }, { "msg_contents": "On 3/11/21, 12:11 PM, \"David G. 
Johnston\" <david.g.johnston@gmail.com> wrote:\r\n> On Thursday, March 11, 2021, Bossart, Nathan <bossartn@amazon.com> wrote:\r\n>> Thanks for reviewing.\r\n>>\r\n>> On 3/11/21, 6:59 AM, \"Laurenz Albe\" <laurenz.albe@cybertec.at> wrote:\r\n>>> I have had a look at the patch, and while I agree that this should\r\n>>> be documented, I am not happy with the patch as it is.\r\n>>>\r\n>>> I think we should *not* document that under \"server configuration\".\r\n>>> This is confusing and will lead people to think that a role is\r\n>>> a configuration parameter. But you cannot add\r\n>>>\r\n>>> role = myrole\r\n>>>\r\n>>> to \"postgresql.conf\". A role is not a GUC.\r\n>>>\r\n>>> I think that the place to document this is\r\n>>> doc/src/sgml/ref/alter_role.sgml.\r\n>>\r\n>> I don't think I totally agree that \"role\" and \"session_authorization\"\r\n>> aren't GUCs. They are defined in guc.c, and \"role\" is referred to as\r\n>> a GUC in both miscinit.c and variable.c.\r\n>\r\n> Implementation details are not that convincing to me. As a user I wouldn’t think of these as being “server configuration” or even “client defaults”; typically they are just representations of me as session state.\r\n> \r\n> The minor bit of documentation pseudo-redundancy doesn’t bother me if I accept they are there own separate thing. The fact that set role and set session authorization are entirely distinct top-level commands in our documentation, as opposed to bundled in with plain set, is a much more convincing example for treating them uniquely and not just additional GUCs.\r\n\r\nI see your point. What do you think about something like the attached\r\npatch?\r\n\r\nNathan", "msg_date": "Thu, 11 Mar 2021 22:30:49 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: documentation fix for SET ROLE" }, { "msg_contents": "On Thu, 2021-03-11 at 22:30 +0000, Bossart, Nathan wrote:\n> On 3/11/21, 12:11 PM, \"David G. 
Johnston\" <david.g.johnston@gmail.com> wrote:\n> > The minor bit of documentation pseudo-redundancy doesn’t bother me if I accept\n> > they are there own separate thing. The fact that set role and set session\n> > authorization are entirely distinct top-level commands in our documentation,\n> > as opposed to bundled in with plain set, is a much more convincing example\n> > for treating them uniquely and not just additional GUCs.\n> \n> I see your point. What do you think about something like the attached\n> patch?\n\nAfter sleeping on it, I have come to think that it is excessive to write\nso much documentation for a feature that is that unimportant.\n\nIt takes some effort to come up with a good use case for it.\n\nI think we can add a few lines to ALTER ROLE, perhaps ALTER DATABASE\n(although I don't see what sense it could make to set that on the database level),\nand briefly explain the difference between RESET ROLE and SET ROLE NONE.\n\nI think adding too much detail will harm - anyone who needs to know the\nexact truth can resort to the implementation.\n\nI'll try to come up with a proposal later.\n\nYours,\nLaurenz Albe\n\n\n\n\n\n", "msg_date": "Fri, 12 Mar 2021 10:16:18 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: documentation fix for SET ROLE" }, { "msg_contents": "On Fri, 2021-03-12 at 10:16 +0100, I wrote:\n> After sleeping on it, I have come to think that it is excessive to write\n> so much documentation for a feature that is that unimportant.\n> \n> It takes some effort to come up with a good use case for it.\n> \n> I think we can add a few lines to ALTER ROLE, perhaps ALTER DATABASE\n> (although I don't see what sense it could make to set that on the database level),\n> and briefly explain the difference between RESET ROLE and SET ROLE NONE.\n> \n> I think adding too much detail will harm - anyone who needs to know the\n> exact truth can resort to the implementation.\n> \n> I'll try to 
come up with a proposal later.\n\nAttached is my idea of the documentation change.\n\nI think that ALTER DATABASE ... SET ROLE can remain undocumented, because\nI cannot imagine that it could be useful.\n\nI am unsure if specifying \"role\" in a libpq connect string might be\nworth documenting. Can you think of a use case?\n\nYours,\nLaurenz Albe", "msg_date": "Fri, 12 Mar 2021 15:35:28 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: documentation fix for SET ROLE" }, { "msg_contents": "On Fri, Mar 12, 2021 at 7:35 AM Laurenz Albe <laurenz.albe@cybertec.at>\nwrote:\n\n> On Fri, 2021-03-12 at 10:16 +0100, I wrote:\n> > After sleeping on it, I have come to think that it is excessive to write\n> > so much documentation for a feature that is that unimportant.\n> >\n> > It takes some effort to come up with a good use case for it.\n> >\n> > I think we can add a few lines to ALTER ROLE, perhaps ALTER DATABASE\n> > (although I don't see what sense it could make to set that on the\n> database level),\n> > and briefly explain the difference between RESET ROLE and SET ROLE NONE.\n> >\n> > I think adding too much detail will harm - anyone who needs to know the\n> > exact truth can resort to the implementation.\n> >\n> > I'll try to come up with a proposal later.\n>\n> Attached is my idea of the documentation change.\n>\n> I think that ALTER DATABASE ... SET ROLE can remain undocumented, because\n> I cannot imagine that it could be useful.\n>\n> I am unsure if specifying \"role\" in a libpq connect string might be\n> worth documenting. Can you think of a use case?\n>\n\nDoes our imagination really matter here? 
It works and is just as \"useful\"\nas \"ALTER ROLE\" and so should be documented if we document ALTER ROLE.\n\nI agree that ALTER DATABASE seems entirely useless and even\ncounter-productive...but I would still document if only because we document\nALTER ROLE and they should be kept similar.\n\nHaven't formed an opinion on the merits of the two patches.\n\nDavid J.", "msg_date": "Fri, 12 Mar 2021 07:45:01 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: documentation fix for SET ROLE" }, { "msg_contents": "On 3/12/21, 6:35 AM, \"Laurenz Albe\" <laurenz.albe@cybertec.at> wrote:\r\n> On Fri, 2021-03-12 at 10:16 +0100, I wrote:\r\n>> After sleeping on it, I have come to think that it is excessive to write\r\n>> so much documentation for a feature that is that unimportant.\r\n>>\r\n>> It takes some effort to come up with a good use case for it.\r\n>>\r\n>> I think we can add a few lines to ALTER ROLE, perhaps ALTER DATABASE\r\n>> (although I don't see what sense it could make to set that on the database level),\r\n>> and briefly explain the difference between RESET ROLE and SET ROLE NONE.\r\n>>\r\n>> I think adding too much detail will harm - anyone who needs to know the\r\n>> exact truth can resort to the implementation.\r\n>>\r\n>> I'll try to come up with a proposal later.\r\n>\r\n> Attached is my idea of the documentation change.\r\n>\r\n> I think that ALTER DATABASE ... SET ROLE can remain undocumented, because\r\n> I cannot imagine that it could be useful.\r\n>\r\n> I am unsure if specifying \"role\" in a libpq connect string might be\r\n> worth documenting. Can you think of a use case?\r\n\r\nMy main goal of this thread is to get the RESET ROLE documentation\r\nfixed. I don't have a terribly strong opinion on documenting these\r\nspecial uses of \"role\". I lean in favor of adding it, but I wouldn't\r\nbe strongly opposed to simply leaving it out for now. 
But if we're\r\ngoing to add it, I think we might as well add it everywhere.\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 12 Mar 2021 18:16:15 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: documentation fix for SET ROLE" }, { "msg_contents": "On 3/12/21 1:16 PM, Bossart, Nathan wrote:\n> On 3/12/21, 6:35 AM, \"Laurenz Albe\" <laurenz.albe@cybertec.at> wrote:\n>> On Fri, 2021-03-12 at 10:16 +0100, I wrote:\n>>> After sleeping on it, I have come to think that it is excessive to write\n>>> so much documentation for a feature that is that unimportant.\n>>>\n>>> It takes some effort to come up with a good use case for it.\n>>>\n>>> I think we can add a few lines to ALTER ROLE, perhaps ALTER DATABASE\n>>> (although I don't see what sense it could make to set that on the database level),\n>>> and briefly explain the difference between RESET ROLE and SET ROLE NONE.\n>>>\n>>> I think adding too much detail will harm - anyone who needs to know the\n>>> exact truth can resort to the implementation.\n>>>\n>>> I'll try to come up with a proposal later.\n>>\n>> Attached is my idea of the documentation change.\n>>\n>> I think that ALTER DATABASE ... SET ROLE can remain undocumented, because\n>> I cannot imagine that it could be useful.\n>>\n>> I am unsure if specifying \"role\" in a libpq connect string might be\n>> worth documenting. Can you think of a use case?\n> \n> My main goal of this thread is to get the RESET ROLE documentation\n> fixed. I don't have a terribly strong opinion on documenting these\n> special uses of \"role\". I lean in favor of adding it, but I wouldn't\n> be strongly opposed to simply leaving it out for now. But if we're\n> going to add it, I think we might as well add it everywhere.\n\n\nLooking back at the commit history it seems to me that this only works \naccidentally. 
Perhaps it would be best to fix RESET ROLE and be done with it.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n", "msg_date": "Fri, 12 Mar 2021 14:13:44 -0500", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: documentation fix for SET ROLE" }, { "msg_contents": "On 3/12/21, 11:14 AM, \"Joe Conway\" <mail@joeconway.com> wrote:\r\n> On 3/12/21 1:16 PM, Bossart, Nathan wrote:\r\n>> My main goal of this thread is to get the RESET ROLE documentation\r\n>> fixed. I don't have a terribly strong opinion on documenting these\r\n>> special uses of \"role\". I lean in favor of adding it, but I wouldn't\r\n>> be strongly opposed to simply leaving it out for now. But if we're\r\n>> going to add it, I think we might as well add it everywhere.\r\n>\r\n>\r\n> Looking back at the commit history it seems to me that this only works\r\n> accidentally. Perhaps it would be best to fix RESET ROLE and be done with it.\r\n\r\nThat seems reasonable to me.\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 12 Mar 2021 21:41:15 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: documentation fix for SET ROLE" }, { "msg_contents": "On Fri, 2021-03-12 at 21:41 +0000, Bossart, Nathan wrote:\n> On 3/12/21, 11:14 AM, \"Joe Conway\" <mail@joeconway.com> wrote:\n> > Looking back at the commit history it seems to me that this only works\n> > accidentally. 
Perhaps it would be best to fix RESET ROLE and be done with it.\n> \n> That seems reasonable to me.\n\n+1 from me too.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Mon, 15 Mar 2021 15:05:54 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: documentation fix for SET ROLE" }, { "msg_contents": "On 3/15/21, 7:06 AM, \"Laurenz Albe\" <laurenz.albe@cybertec.at> wrote:\r\n> On Fri, 2021-03-12 at 21:41 +0000, Bossart, Nathan wrote:\r\n>> On 3/12/21, 11:14 AM, \"Joe Conway\" <mail@joeconway.com> wrote:\r\n>> > Looking back at the commit history it seems to me that this only works\r\n>> > accidentally. Perhaps it would be best to fix RESET ROLE and be done with it.\r\n>>\r\n>> That seems reasonable to me.\r\n>\r\n> +1 from me too.\r\n\r\nHere's my latest attempt. I think it's important to state that it\r\nsets the role to the current session user identifier unless there is a\r\nconnection-time setting. If there is no connection-time setting, it\r\nwill reset the role to the current session user, which might be\r\ndifferent if you've run SET SESSION AUTHORIZATION.\r\n\r\ndiff --git a/doc/src/sgml/ref/set_role.sgml b/doc/src/sgml/ref/set_role.sgml\r\nindex 739f2c5cdf..f02babf3af 100644\r\n--- a/doc/src/sgml/ref/set_role.sgml\r\n+++ b/doc/src/sgml/ref/set_role.sgml\r\n@@ -53,9 +53,16 @@ RESET ROLE\r\n </para>\r\n\r\n <para>\r\n- The <literal>NONE</literal> and <literal>RESET</literal> forms reset the current\r\n- user identifier to be the current session user identifier.\r\n- These forms can be executed by any user.\r\n+ <literal>SET ROLE NONE</literal> sets the current user identifier to the\r\n+ current session user identifier, as returned by\r\n+ <function>session_user</function>. 
<literal>RESET ROLE</literal> sets the\r\n+ current user identifier to the connection-time setting specified by the\r\n+ <link linkend=\"libpq-connect-options\">command-line options</link>,\r\n+ <link linkend=\"sql-alterrole\"><command>ALTER ROLE</command></link>, or\r\n+ <link linkend=\"sql-alterdatabase\"><command>ALTER DATABASE</command></link>,\r\n+ if any such settings exist. Otherwise, <literal>RESET ROLE</literal> sets\r\n+ the current user identifier to the current session user identifier. These\r\n+ forms can be executed by any user.\r\n </para>\r\n </refsect1>\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 15 Mar 2021 17:09:00 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: documentation fix for SET ROLE" }, { "msg_contents": "On Mon, 2021-03-15 at 17:09 +0000, Bossart, Nathan wrote:\n> On 3/15/21, 7:06 AM, \"Laurenz Albe\" <laurenz.albe@cybertec.at> wrote:\n> > On Fri, 2021-03-12 at 21:41 +0000, Bossart, Nathan wrote:\n> > > On 3/12/21, 11:14 AM, \"Joe Conway\" <mail@joeconway.com> wrote:\n> > > > Looking back at the commit history it seems to me that this only works\n> > > > accidentally. Perhaps it would be best to fix RESET ROLE and be done with it.\n> > > \n> > > That seems reasonable to me.\n> > \n> > +1 from me too.\n> \n> Here's my latest attempt. I think it's important to state that it\n> sets the role to the current session user identifier unless there is a\n> connection-time setting. 
If there is no connection-time setting, it\n> will reset the role to the current session user, which might be\n> different if you've run SET SESSION AUTHORIZATION.\n> \n> diff --git a/doc/src/sgml/ref/set_role.sgml b/doc/src/sgml/ref/set_role.sgml\n> index 739f2c5cdf..f02babf3af 100644\n> --- a/doc/src/sgml/ref/set_role.sgml\n> +++ b/doc/src/sgml/ref/set_role.sgml\n> @@ -53,9 +53,16 @@ RESET ROLE\n> </para>\n> \n> <para>\n> - The <literal>NONE</literal> and <literal>RESET</literal> forms reset the current\n> - user identifier to be the current session user identifier.\n> - These forms can be executed by any user.\n> + <literal>SET ROLE NONE</literal> sets the current user identifier to the\n> + current session user identifier, as returned by\n> + <function>session_user</function>. <literal>RESET ROLE</literal> sets the\n> + current user identifier to the connection-time setting specified by the\n> + <link linkend=\"libpq-connect-options\">command-line options</link>,\n> + <link linkend=\"sql-alterrole\"><command>ALTER ROLE</command></link>, or\n> + <link linkend=\"sql-alterdatabase\"><command>ALTER DATABASE</command></link>,\n> + if any such settings exist. Otherwise, <literal>RESET ROLE</literal> sets\n> + the current user identifier to the current session user identifier. These\n> + forms can be executed by any user.\n> </para>\n> </refsect1>\n\nActually, SET ROLE NONE is defined by the SQL standard:\n\n 18.3 <set role statement>\n\n [...]\n\n If NONE is specified, then\n Case:\n i) If there is no current user identifier, then an exception condition is raised:\n invalid role specification.\n ii) Otherwise, the current role name is removed.\n\nThis is reflected in a comment in src/backend/commands/variable.c:\n\n /*\n * SET ROLE\n *\n * The SQL spec requires \"SET ROLE NONE\" to unset the role, so we hardwire\n * a translation of \"none\" to InvalidOid. 
Otherwise this is much like\n * SET SESSION AUTHORIZATION.\n */\n\nOn the other hand, RESET (according to src/backend/utils/misc/README)\ndoes something different:\n\n Prior values of configuration variables must be remembered in order to deal\n with several special cases: RESET (a/k/a SET TO DEFAULT)\n\nSo I think it is intentional that RESET ROLE does something else than\nSET ROLE NONE, and we should not change that.\n\nSo I think that documenting this is the way to go. I'll mark it as\n\"ready for committer\".\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Fri, 02 Apr 2021 16:21:08 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: documentation fix for SET ROLE" }, { "msg_contents": "On 4/2/21 10:21 AM, Laurenz Albe wrote:\n> On Mon, 2021-03-15 at 17:09 +0000, Bossart, Nathan wrote:\n>> On 3/15/21, 7:06 AM, \"Laurenz Albe\" <laurenz.albe@cybertec.at> wrote:\n>> > On Fri, 2021-03-12 at 21:41 +0000, Bossart, Nathan wrote:\n>> > > On 3/12/21, 11:14 AM, \"Joe Conway\" <mail@joeconway.com> wrote:\n>> > > > Looking back at the commit history it seems to me that this only works\n>> > > > accidentally. Perhaps it would be best to fix RESET ROLE and be done with it.\n>> > > \n>> > > That seems reasonable to me.\n>> > \n>> > +1 from me too.\n>> \n>> Here's my latest attempt. I think it's important to state that it\n>> sets the role to the current session user identifier unless there is a\n>> connection-time setting. 
If there is no connection-time setting, it\n>> will reset the role to the current session user, which might be\n>> different if you've run SET SESSION AUTHORIZATION.\n>> \n>> diff --git a/doc/src/sgml/ref/set_role.sgml b/doc/src/sgml/ref/set_role.sgml\n>> index 739f2c5cdf..f02babf3af 100644\n>> --- a/doc/src/sgml/ref/set_role.sgml\n>> +++ b/doc/src/sgml/ref/set_role.sgml\n>> @@ -53,9 +53,16 @@ RESET ROLE\n>> </para>\n>> \n>> <para>\n>> - The <literal>NONE</literal> and <literal>RESET</literal> forms reset the current\n>> - user identifier to be the current session user identifier.\n>> - These forms can be executed by any user.\n>> + <literal>SET ROLE NONE</literal> sets the current user identifier to the\n>> + current session user identifier, as returned by\n>> + <function>session_user</function>. <literal>RESET ROLE</literal> sets the\n>> + current user identifier to the connection-time setting specified by the\n>> + <link linkend=\"libpq-connect-options\">command-line options</link>,\n>> + <link linkend=\"sql-alterrole\"><command>ALTER ROLE</command></link>, or\n>> + <link linkend=\"sql-alterdatabase\"><command>ALTER DATABASE</command></link>,\n>> + if any such settings exist. Otherwise, <literal>RESET ROLE</literal> sets\n>> + the current user identifier to the current session user identifier. These\n>> + forms can be executed by any user.\n>> </para>\n>> </refsect1>\n> \n> Actually, SET ROLE NONE is defined by the SQL standard:\n> \n> 18.3 <set role statement>\n> \n> [...]\n> \n> If NONE is specified, then\n> Case:\n> i) If there is no current user identifier, then an exception condition is raised:\n> invalid role specification.\n> ii) Otherwise, the current role name is removed.\n> \n> This is reflected in a comment in src/backend/commands/variable.c:\n> \n> /*\n> * SET ROLE\n> *\n> * The SQL spec requires \"SET ROLE NONE\" to unset the role, so we hardwire\n> * a translation of \"none\" to InvalidOid. 
Otherwise this is much like\n> * SET SESSION AUTHORIZATION.\n> */\n> \n> On the other hand, RESET (according to src/backend/utils/misc/README)\n> does something different:\n> \n> Prior values of configuration variables must be remembered in order to deal\n> with several special cases: RESET (a/k/a SET TO DEFAULT)\n> \n> So I think it is intentional that RESET ROLE does something else than\n> SET ROLE NONE, and we should not change that.\n> \n> So I think that documenting this is the way to go. I'll mark it as\n> \"ready for committer\".\n\npushed\n\nJoe\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n", "msg_date": "Fri, 2 Apr 2021 13:53:31 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: documentation fix for SET ROLE" }, { "msg_contents": "On 4/2/21, 10:54 AM, \"Joe Conway\" <mail@joeconway.com> wrote:\r\n> On 4/2/21 10:21 AM, Laurenz Albe wrote:\r\n>> So I think that documenting this is the way to go. I'll mark it as\r\n>> \"ready for committer\".\r\n>\r\n> pushed\r\n\r\nThanks!\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 2 Apr 2021 18:01:47 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": true, "msg_subject": "Re: documentation fix for SET ROLE" } ]
[ { "msg_contents": "Hello,\n\nIn another thread[1], I proposed $SUBJECT, but then we found a better\nsolution to that thread's specific problem. The general idea is still\ngood though: it's possible to (1) replace several existing copies of\nour qsort algorithm with one, and (2) make new specialised versions a\nbit more easily than the existing Perl generator allows. So, I'm back\nwith a rebased stack of patches. I'll leave specific cases for new\nworthwhile specialisations for separate proposals; I've heard about\nseveral.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGKMQFVpjr106gRhwk6R-nXv0qOcTreZuQzxgpHESAL6dw%40mail.gmail.com", "msg_date": "Thu, 18 Feb 2021 16:09:49 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "A qsort template" }, { "msg_contents": "Hi,\n\nOn 2021-02-18 16:09:49 +1300, Thomas Munro wrote:\n> In another thread[1], I proposed $SUBJECT, but then we found a better\n> solution to that thread's specific problem. The general idea is still\n> good though: it's possible to (1) replace several existing copies of\n> our qsort algorithm with one, and (2) make new specialised versions a\n> bit more easily than the existing Perl generator allows. So, I'm back\n> with a rebased stack of patches. I'll leave specific cases for new\n> worthwhile specialisations for separate proposals; I've heard about\n> several.\n\nOne place that could benefit is the qsort that BufferSync() does at the\nstart. I tried your patch for that, and it does reduce the sort time\nconsiderably. For 64GB of mostly dirty shared_buffers from ~1.4s to\n0.6s.\n\nNow, obviously one can argue that that's not going to be the crucial\nspot, and wouldn't be entirely wrong. 
OTOH, in my AIO branch I see\ncheckpointer doing ~10GB/s, leading to the sort being a measurable\nportion of the overall time.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 17 Feb 2021 22:02:22 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "> On 18 Feb 2021, at 04:09, Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> In another thread[1], I proposed $SUBJECT, but then we found a better\n> solution to that thread's specific problem. The general idea is still\n> good though: it's possible to (1) replace several existing copies of\n> our qsort algorithm with one, and (2) make new specialised versions a\n> bit more easily than the existing Perl generator allows. So, I'm back\n> with a rebased stack of patches. I'll leave specific cases for new\n> worthwhile specialisations for separate proposals; I've heard about\n> several.\n\nJust to play around with this while reviewing I made a qsort_strcmp, like in\nthe attached, and tested it using a ~9M word [0] randomly shuffled wordlist.\nWhile being too small input to make any meaningful difference in runtime (it\nshaved a hair off but it might well be within the error margin) there was no\nregression either. More importantly, it was really simple and quick to make a\ntailored qsort which is the intention with the patch. 
While still being a bit\nof magic, moving from the Perl generator makes this slightly less magic IMO so\n+1 on this approach.\n\nA tiny nitpick on the patch itself:\n\n+ * - ST_COMPARE(a, b) - a simple comparison expression\n+ * - ST_COMPARE(a, b, arg) - variant that takes an extra argument\nIndentation.\n\nAll tests pass and the documentation in the the sort_template.h is enough to go\non, but I would prefer to see a comment in port/qsort.c referring back to\nsort_template.h for documentation.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n[0] https://github.com/dwyl/english-words/ shuffled 20 times over", "msg_date": "Tue, 2 Mar 2021 22:25:39 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Wed, Mar 3, 2021 at 10:25 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > On 18 Feb 2021, at 04:09, Thomas Munro <thomas.munro@gmail.com> wrote:\n> > In another thread[1], I proposed $SUBJECT, but then we found a better\n> > solution to that thread's specific problem. The general idea is still\n> > good though: it's possible to (1) replace several existing copies of\n> > our qsort algorithm with one, and (2) make new specialised versions a\n> > bit more easily than the existing Perl generator allows. So, I'm back\n> > with a rebased stack of patches. I'll leave specific cases for new\n> > worthwhile specialisations for separate proposals; I've heard about\n> > several.\n>\n> Just to play around with this while reviewing I made a qsort_strcmp, like in\n> the attached, and tested it using a ~9M word [0] randomly shuffled wordlist.\n> While being too small input to make any meaningful difference in runtime (it\n> shaved a hair off but it might well be within the error margin) there was no\n> regression either. More importantly, it was really simple and quick to make a\n> tailored qsort which is the intention with the patch. 
While still being a bit\n> of magic, moving from the Perl generator makes this slightly less magic IMO so\n> +1 on this approach.\n\nThanks for testing and reviewing!\n\n> A tiny nitpick on the patch itself:\n>\n> + * - ST_COMPARE(a, b) - a simple comparison expression\n> + * - ST_COMPARE(a, b, arg) - variant that takes an extra argument\n> Indentation.\n\nFixed. Also ran pgindent.\n\n> All tests pass and the documentation in the the sort_template.h is enough to go\n> on, but I would prefer to see a comment in port/qsort.c referring back to\n> sort_template.h for documentation.\n\nI tried adding a comment along the lines \"see lib/sort_template.h for\ndetails\", but it felt pretty redundant, when the file contains very\nlittle other than #include \"lib/sort_template.h\" which should already\ntell you to go and look there to find out what this is about...\n\nI went ahead and pushed these.\n\nI am sure there are plenty of opportunities to experiment with this\ncode. Here are some I recall Peter Geoghegan mentioning:\n\n1. If you know that elements are unique, you could remove some\nbranches that deal with equal elements (see \"r == 0\").\n2. Perhaps you might want to be able to disable the \"presorted\" check\nin some cases?\n3. The parameters 7, 7 and 40 were probably tuned for an ancient Vax\nor similar[1]. We see higher insertion sort thesholds such as 27 in\nmore recent sort algorithms[2] used in eg the JVM. 
You could perhaps\nspeculate that the right answer depends in part on the element size; I\ndunno, but if so, here we have that at compile time while traditional\nqsort() does not.\n\nAs for which cases are actually worth specialising, I've attached the\nexample that Andres mentioned earlier; it seems like a reasonable\ncandidate to go ahead and commit too, but I realised that I'd\nforgotten to attach it earlier.\n\nIt's possible that the existing support sorting tuples could be\nfurther specialised for common sort key data types; I haven't tried\nthat.\n\n[1] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.14.8162&rep=rep1&type=pdf\n[2] https://codeblab.com/wp-content/uploads/2009/09/DualPivotQuicksort.pdf", "msg_date": "Wed, 3 Mar 2021 17:17:13 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A qsort template" }, { "msg_contents": "Hi,\n\nI wish we had the same for bsearch... :)\n\n\nOn 2021-03-03 17:17:13 +1300, Thomas Munro wrote:\n> As for which cases are actually worth specialising, I've attached the\n> example that Andres mentioned earlier; it seems like a reasonable\n> candidate to go ahead and commit too, but I realised that I'd\n> forgotten to attach it earlier.\n\n> From 4cec5cb9a2e0c50726b7337fb8221281e155c4cd Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <thomas.munro@gmail.com>\n> Date: Thu, 18 Feb 2021 14:47:28 +1300\n> Subject: [PATCH] Specialize checkpointer sort functions.\n> \n> When sorting a potentially large number of dirty buffers, the\n> checkpointer can benefit from a faster sort routine. 
One reported\n> improvement on a large buffer pool system was 1.4s -> 0.6s.\n> \n> Discussion: https://postgr.es/m/CA%2BhUKGJ2-eaDqAum5bxhpMNhvuJmRDZxB_Tow0n-gse%2BHG0Yig%40mail.gmail.com\n\nLooks good to me.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 11 Mar 2021 10:58:47 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Fri, Mar 12, 2021 at 7:58 AM Andres Freund <andres@anarazel.de> wrote:\n> I wish we had the same for bsearch... :)\n\nGlibc already has the definition of the traditional void-based\nfunction in /usr/include/bits/stdlib-bsearch.h, so the generated code\nwhen the compiler can see the comparator definition is already good in\neg lazy_tid_reaped() and eg some nbtree search routines. We could\nprobably expose more trivial comparators in headers to get more of\nthat, and we could perhaps put our own bsearch definition in a header\nfor other platforms that didn't think of that...\n\nIt might be worth doing type-safe macro templates as well, though (as\nI already did in an earlier proposal[1]), just to have nice type safe\ncode though, not sure, I'm thinking about that...\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGLY47Cvu62mFDT53Ya0P95cGggcBN6R6aLpx6%3DGm5j%2B1A%40mail.gmail.com\n\n\n", "msg_date": "Sat, 13 Mar 2021 15:49:36 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Sat, Mar 13, 2021 at 3:49 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Mar 12, 2021 at 7:58 AM Andres Freund <andres@anarazel.de> wrote:\n> > I wish we had the same for bsearch... :)\n>\n> Glibc already has the definition of the traditional void-based\n> function in /usr/include/bits/stdlib-bsearch.h, so the generated code\n> when the compiler can see the comparator definition is already good in\n> eg lazy_tid_reaped() and eg some nbtree search routines. 
We could\n> probably expose more trivial comparators in headers to get more of\n> that, and we could perhaps put our own bsearch definition in a header\n> for other platforms that didn't think of that...\n>\n> It might be worth doing type-safe macro templates as well, though (as\n> I already did in an earlier proposal[1]), just to have nice type safe\n> code though, not sure, I'm thinking about that...\n\nI remembered a very good reason to do this: the ability to do\nbranch-free comparators in more places by introducing optional wider\nresults. That's good for TIDs (needs 49 bits), and places that want\nto \"reverse\" a traditional comparator (just doing -result on an int\ncomparator that might theoretically return INT_MIN requires at least\n33 bits). So I rebased the relevant parts of my earlier version, and\nwent through and wrote a bunch of examples to demonstrate all this\nstuff actually working.\n\nThere are two categories of change in these patches:\n\n0002-0005: Places that sort/unique/search OIDs, BlockNumbers and TIDs,\nwhich can reuse a small set of typed functions (a few more could be\nadded, if useful). See sortitemptr.h and sortscalar.h. Mostly this\nis just a notational improvement, and an excuse to drop a bunch of\nduplicated code. In a few places this might really speed something\nimportant up! Like VACUUM's lazy_tid_reaped().\n\n0006-0009. Places where a specialised function is generated for one\nspecial purpose, such as ANALYZE's HeapTuple sort, tidbitmap.c's\npagetable sort, some places in nbtree code etc. 
These may require\nsome case-by-case research on whether the extra executable size is\nworth the speedup, and there are surely more opportunities like that;\nI just picked on these arbitrarily.", "msg_date": "Sun, 14 Mar 2021 15:35:23 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A qsort template" }, { "msg_contents": "Hi,\n\nFor 0001-Add-bsearch-and-unique-templates-to-sort_template.h.patch :\n\n+ * Remove duplicates from an array. Return the new size.\n+ */\n+ST_SCOPE size_t\n+ST_UNIQUE(ST_ELEMENT_TYPE *array,\n\nThe array is supposed to be sorted, right ?\nThe comment should mention this.\n\nCheers\n\nOn Sat, Mar 13, 2021 at 6:36 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Sat, Mar 13, 2021 at 3:49 PM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> > On Fri, Mar 12, 2021 at 7:58 AM Andres Freund <andres@anarazel.de>\n> wrote:\n> > > I wish we had the same for bsearch... :)\n> >\n> > Glibc already has the definition of the traditional void-based\n> > function in /usr/include/bits/stdlib-bsearch.h, so the generated code\n> > when the compiler can see the comparator definition is already good in\n> > eg lazy_tid_reaped() and eg some nbtree search routines. We could\n> > probably expose more trivial comparators in headers to get more of\n> > that, and we could perhaps put our own bsearch definition in a header\n> > for other platforms that didn't think of that...\n> >\n> > It might be worth doing type-safe macro templates as well, though (as\n> > I already did in an earlier proposal[1]), just to have nice type safe\n> > code though, not sure, I'm thinking about that...\n>\n> I remembered a very good reason to do this: the ability to do\n> branch-free comparators in more places by introducing optional wider\n> results. 
That's good for TIDs (needs 49 bits), and places that want\n> to \"reverse\" a traditional comparator (just doing -result on an int\n> comparator that might theoretically return INT_MIN requires at least\n> 33 bits). So I rebased the relevant parts of my earlier version, and\n> went through and wrote a bunch of examples to demonstrate all this\n> stuff actually working.\n>\n> There are two categories of change in these patches:\n>\n> 0002-0005: Places that sort/unique/search OIDs, BlockNumbers and TIDs,\n> which can reuse a small set of typed functions (a few more could be\n> added, if useful). See sortitemptr.h and sortscalar.h. Mostly this\n> is just a notational improvement, and an excuse to drop a bunch of\n> duplicated code. In a few places this might really speed something\n> important up! Like VACUUM's lazy_tid_reaped().\n>\n> 0006-0009. Places where a specialised function is generated for one\n> special purpose, such as ANALYZE's HeapTuple sort, tidbitmap.c's\n> pagetable sort, some places in nbtree code etc. These may require\n> some case-by-case research on whether the extra executable size is\n> worth the speedup, and there are surely more opportunities like that;\n> I just picked on these arbitrarily.\n>\n", "msg_date": "Sat, 13 Mar 2021 20:06:26 -0800", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Sun, Mar 14, 2021 at 5:03 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> + * Remove duplicates from an array. Return the new size.\n> + */\n> +ST_SCOPE size_t\n> +ST_UNIQUE(ST_ELEMENT_TYPE *array,\n>\n> The array is supposed to be sorted, right ?\n> The comment should mention this.\n\nGood point, will update. Thanks!\n\n\n", "msg_date": "Mon, 15 Mar 2021 13:09:16 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Mon, Mar 15, 2021 at 1:09 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sun, Mar 14, 2021 at 5:03 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > + * Remove duplicates from an array. Return the new size.\n> > + */\n> > +ST_SCOPE size_t\n> > +ST_UNIQUE(ST_ELEMENT_TYPE *array,\n> >\n> > The array is supposed to be sorted, right ?\n> > The comment should mention this.\n>\n> Good point, will update. Thanks!\n\nRebased. Also fixed some formatting problems and updated\ntypedefs.list so they don't come back.", "msg_date": "Wed, 16 Jun 2021 17:54:48 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Tue, Jun 15, 2021 at 10:55 PM Thomas Munro <thomas.munro@gmail.com>\nwrote:\n\n> On Mon, Mar 15, 2021 at 1:09 PM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> > On Sun, Mar 14, 2021 at 5:03 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > > + * Remove duplicates from an array. 
Return the new size.\n> > > + */\n> > > +ST_SCOPE size_t\n> > > +ST_UNIQUE(ST_ELEMENT_TYPE *array,\n> > >\n> > > The array is supposed to be sorted, right ?\n> > > The comment should mention this.\n> >\n> > Good point, will update. Thanks!\n>\n> Rebased. Also fixed some formatting problems and updated\n> typedefs.list so they don't come back.\n>\n\nHi,\nIn 0001-Add-bsearch-and-unique-templates-to-sort_template.h.patch :\n\n- const ST_ELEMENT_TYPE *\nST_SORT_PROTO_ARG);\n+ const ST_ELEMENT_TYPE\n*ST_SORT_PROTO_ARG);\n\nIt seems there is no real change in the line above. Better keep the\noriginal formation.\n\n * - ST_COMPARE_ARG_TYPE - type of extra argument\n *\n+ * To say that the comparator returns a type other than int, use:\n+ *\n+ * - ST_COMPARE_TYPE - an integer type\n\nSince the ST_COMPARE_TYPE is meant to designate the type of the return\nvalue, maybe ST_COMPARE_RET_TYPE would be better name.\nIt also goes with ST_COMPARE_ARG_TYPE preceding this.\n\n- ST_POINTER_TYPE *a = (ST_POINTER_TYPE *) data,\n- *pa,\n- *pb,\n- *pc,\n- *pd,\n- *pl,\n- *pm,\n- *pn;\n+ ST_POINTER_TYPE *a = (ST_POINTER_TYPE *) data;\n+ ST_POINTER_TYPE *pa;\n\nThere doesn't seem to be material change for the above hunk.\n\n+ while (left <= right)\n+ {\n+ size_t mid = (left + right) / 2;\n\nThe computation for midpoint should be left + (right-left)/2.\n\nCheers\n", "msg_date": "Wed, 16 Jun 2021 13:18:16 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "Hi Zhihong,\n\nOn Thu, Jun 17, 2021 at 8:13 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> In 0001-Add-bsearch-and-unique-templates-to-sort_template.h.patch :\n>\n> - const ST_ELEMENT_TYPE * ST_SORT_PROTO_ARG);\n> + const ST_ELEMENT_TYPE *ST_SORT_PROTO_ARG);\n>\n> It seems there is no real change in the line above. Better keep the original formation.\n\nHmm, well it was only recently damaged by commit def5b065, and that's\nbecause I'd forgotten to put ST_ELEMENT_TYPE into typedefs.list, and I\nwas correcting that in this patch. 
(That file is used by\npg_bsd_indent to decide if an identifier is a type or a variable,\nwhich affects whether '*' is formatted like a unary operator/type\nsyntax or a binary operator.)\n\n> * - ST_COMPARE_ARG_TYPE - type of extra argument\n> *\n> + * To say that the comparator returns a type other than int, use:\n> + *\n> + * - ST_COMPARE_TYPE - an integer type\n>\n> Since the ST_COMPARE_TYPE is meant to designate the type of the return value, maybe ST_COMPARE_RET_TYPE would be better name.\n> It also goes with ST_COMPARE_ARG_TYPE preceding this.\n\nGood idea, will do.\n\n> - ST_POINTER_TYPE *a = (ST_POINTER_TYPE *) data,\n> - *pa,\n> - *pb,\n> - *pc,\n> - *pd,\n> - *pl,\n> - *pm,\n> - *pn;\n> + ST_POINTER_TYPE *a = (ST_POINTER_TYPE *) data;\n> + ST_POINTER_TYPE *pa;\n>\n> There doesn't seem to be material change for the above hunk.\n\nIn master, you can't write #define ST_ELEMENT_TYPE some_type *, which\nseems like it would be quite useful. You can use pointers as element\ntypes, but only with a typedef name due to C parsing rules. some_type\n**a, *pa, ... declares some_type *pa, but we want some_type **pa. I\ndon't want to have to introduce extra typedefs. The change fixes that\nproblem by not using C's squirrelly variable declaration list syntax.\n\n> + while (left <= right)\n> + {\n> + size_t mid = (left + right) / 2;\n>\n> The computation for midpoint should be left + (right-left)/2.\n\nRight, my way can overflow. Will fix. 
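(To spell out the overflow hazard and the fix in one place — a minimal,
self-contained sketch with invented names, not code taken from the patch:)

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Midpoint the hazardous way: for large indexes, left + right can wrap
 * around (well-defined for unsigned types, but the result is wrong).
 */
static inline size_t
midpoint_naive(size_t left, size_t right)
{
	return (left + right) / 2;
}

/*
 * Midpoint the safe way: when left <= right, the difference right - left
 * cannot overflow, so the result is always correct.
 */
static inline size_t
midpoint_safe(size_t left, size_t right)
{
	return left + (right - left) / 2;
}
```

For ordinary small ranges the two agree; near SIZE_MAX only the second form
stays correct.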
Thanks!\n\n\n", "msg_date": "Thu, 17 Jun 2021 09:54:21 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Wed, Jun 16, 2021 at 2:54 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> Hi Zhihong,\n>\n> On Thu, Jun 17, 2021 at 8:13 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > In 0001-Add-bsearch-and-unique-templates-to-sort_template.h.patch :\n> >\n> > - const ST_ELEMENT_TYPE *\n> ST_SORT_PROTO_ARG);\n> > + const ST_ELEMENT_TYPE\n> *ST_SORT_PROTO_ARG);\n> >\n> > It seems there is no real change in the line above. Better keep the\n> original formation.\n>\n> Hmm, well it was only recently damaged by commit def5b065, and that's\n> because I'd forgotten to put ST_ELEMENT_TYPE into typedefs.list, and I\n> was correcting that in this patch. (That file is used by\n> pg_bsd_indent to decide if an identifier is a type or a variable,\n> which affects whether '*' is formatted like a unary operator/type\n> syntax or a binary operator.)\n>\n> > * - ST_COMPARE_ARG_TYPE - type of extra argument\n> > *\n> > + * To say that the comparator returns a type other than int, use:\n> > + *\n> > + * - ST_COMPARE_TYPE - an integer type\n> >\n> > Since the ST_COMPARE_TYPE is meant to designate the type of the return\n> value, maybe ST_COMPARE_RET_TYPE would be better name.\n> > It also goes with ST_COMPARE_ARG_TYPE preceding this.\n>\n> Good idea, will do.\n>\n> > - ST_POINTER_TYPE *a = (ST_POINTER_TYPE *) data,\n> > - *pa,\n> > - *pb,\n> > - *pc,\n> > - *pd,\n> > - *pl,\n> > - *pm,\n> > - *pn;\n> > + ST_POINTER_TYPE *a = (ST_POINTER_TYPE *) data;\n> > + ST_POINTER_TYPE *pa;\n> >\n> > There doesn't seem to be material change for the above hunk.\n>\n> In master, you can't write #define ST_ELEMENT_TYPE some_type *, which\n> seems like it would be quite useful. You can use pointers as element\n> types, but only with a typedef name due to C parsing rules. some_type\n> **a, *pa, ... 
declares some_type *pa, but we want some_type **pa. I\n> don't want to have to introduce extra typedefs. The change fixes that\n> problem by not using C's squirrelly variable declaration list syntax.\n>\n> > + while (left <= right)\n> > + {\n> > + size_t mid = (left + right) / 2;\n> >\n> > The computation for midpoint should be left + (right-left)/2.\n>\n> Right, my way can overflow. Will fix. Thanks!\n>\n\nHi,\nThanks for giving me background on typedefs.\nThe relevant changes look fine to me.\n\nCheers\n", "msg_date": "Wed, 16 Jun 2021 15:05:24 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Hmm, well it was only recently damaged by commit def5b065, and that's\n> because I'd forgotten to put ST_ELEMENT_TYPE into typedefs.list, and I\n> was correcting that in this patch.\n\nIf ST_ELEMENT_TYPE isn't recognized as a typedef by the buildfarm's\ntypedef collectors, this sort of manual addition to typedefs.list\nis not going to survive the next pgindent run. No, I will NOT
No, I will NOT\n> promise to manually add it back every time.\n>\n> We do already have special provision for injecting additional typedefs\n> in the pgindent script, so one possibility is to add it there:\n>\n> -my @additional = (\"bool\\n\");\n> +my @additional = (\"bool\\nST_ELEMENT_TYPE\\n\");\n>\n> On the whole I'm not sure that this is a big enough formatting\n> issue to justify a special hack, though. Is there any more than\n> the one line that gets misformatted?\n\nOhh. In that case, I won't bother with that hunk and will live with\nthe extra space. There are several other lines like this in the tree,\nwhere people use caveman template macrology that is invisible to\nwhatever analyser is being used for that, and I can see that that's\njust going to have to be OK for now. Perhaps one day we could add a\nsecondary file, not updated by that mechanism, that holds a manually\nmaintained list for cases like this.\n\n\n", "msg_date": "Thu, 17 Jun 2021 13:07:38 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A qsort template" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Perhaps one day we could add a\n> secondary file, not updated by that mechanism, that holds a manually\n> maintained list for cases like this.\n\nYeah, the comments in pgindent already speculate about that. 
For\nnow, those include and exclude lists are short enough that keeping\nthem inside the script seems a lot easier than building tooling\nto get them from somewhere else.\n\nThe big problem in my mind, which would not be alleviated in the\nslightest by having a separate file, is that it'd be easy to miss\nremoving entries if they ever become obsolete.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Jun 2021 21:14:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Thu, Jun 17, 2021 at 1:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The big problem in my mind, which would not be alleviated in the\n> slightest by having a separate file, is that it'd be easy to miss\n> removing entries if they ever become obsolete.\n\nI suppose you could invent some kind of declaration syntax in a\ncomment near the use of the pseudo-typename in the source tree that is\nmechanically extracted.\n\n\n", "msg_date": "Thu, 17 Jun 2021 13:20:44 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Wed, Jun 16, 2021 at 1:55 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n[v2 patch]\n\nHi Thomas,\n\nI plan to do some performance testing with VACUUM, ANALYZE etc soon, to see\nif I can detect any significant differences.\n\nI did a quick check of the MacOS/clang binary size (no debug symbols):\n\nmaster: 8108408\n0001-0009: 8125224\n\nLater, I'll drill down into the individual patches and see if anything\nstands out.\n\nThere were already some comments for v2 upthread about formatting and an\noverflow hazard, but I did find a few more things to ask about:\n\n- For my curiosity, there are a lot of calls to qsort/qunique in the tree\n-- without having looked exhaustively, do these patches focus on cases\nwhere there are bespoke comparator functions and/or hot code paths?\n\n- Aside from the qsort{_arg} 
precedence, is there a practical reason for\nkeeping the new global functions in their own files?\n\n- 0002 / 0004\n\n+/* Search and unique functions inline in header. */\n\nThe functions are pretty small, but is there some advantage for inlining\nthese?\n\n- 0003\n\n#include \"lib/qunique.h\" is not needed anymore.\n\nThis isn't quite relevant for the current patch perhaps, but I'm wondering\nwhy we don't already call bsearch for RelationHasSysCache() and\nRelationSupportsSysCache().\n\n- 0008\n\n+#define ST_COMPARE(a, b, cxt) \\\n+ DatumGetInt32(FunctionCall2Coll(&cxt->flinfo, cxt->collation, *a, *b))\n\nThis seems like a pretty heavyweight comparison, so I'm not sure inlining\nbuys us much, but it seems also there are fewer branches this way. I'll\ncome up with a test and see what happens.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n", "msg_date": "Mon, 28 Jun 2021 15:13:06 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "Hi John,\n\nOn Tue, Jun 29, 2021 at 7:13 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> I plan to do some performance testing with VACUUM, ANALYZE etc soon, to see if I can detect any significant differences.\n\nThanks!\n\n> I did a quick check of the MacOS/clang binary size (no debug symbols):\n>\n> master: 8108408\n> 0001-0009: 8125224\n\nNot too bad.\n\n> Later, I'll drill down into the individual patches and see if anything stands out.\n>\n> There were already some comments for v2 upthread about formatting and an overflow hazard, but I did find a few more things to ask about:\n\nRight, here's an update with fixes discussed earlier with Zhihong and Tom:\n\n* COMPARE_TYPE -> COMPARE_RET_TYPE\n* quit fighting with pgindent (I will try to fix this problem generally later)\n* fix overflow hazard\n\n> - For my curiosity, there are a lot of calls to qsort/qunique in the tree -- without having looked exhaustively, do these patches focus on cases where there are bespoke comparator functions and/or hot code paths?\n\nPatches 0006-0009 are highly specialised for local usage by a single\nmodule, and require some kind of evidence that they're 
worth their\nbytes, and the onus is on me there of course -- but any ideas and\nfeedback are welcome. There are other opportunities like these, maybe\nbetter ones. That reminds me: I recently had a perf report from\nAndres that showed the qsort in compute_scalar_stats() as quite hot.\nThat's probably a good candidate, and is not yet done in the current\npatch set.\n\nThe lower numbered patches are all things that are reused in many\nplaces, and in my humble opinion improve the notation and type safety\nand code deduplication generally when working with common types\nItemPtr, BlockNumber, Oid, aside from any performance arguments. At\nleast the ItemPtr stuff *might* also speed something useful up.\n\nI tried to measure a speedup in vacuum, but so far I have not. I did\nlearn some things though: While doing that with an uncorrelated index\nand a lot of deleted tuples, I found that adding more\nmaintenance_work_mem doesn't help beyond a few MB, because then cache\nmisses dominate to the point where it's not better than doing multiple\npasses (and this is familiar to me from work on hash joins). If I\nturned on huge pages on Linux and set min_dynamic_shared_memory so\nthat the parallel DSM used by vacuum lives in huge pages, then\nparallel vacuum with a large maintenance_work_mem starts to do much\nbetter than non-parallel vacuum by improving the TLB misses (as with\nhash joins). I thought that was quite interesting! Perhaps\nbsearch_itemptr might help with correlated indexes with a lot of\ndeleted tuples (so not dominated by cache misses), though?\n\n(I wouldn't be surprised if someone comes up with a much better idea\nthan bsearch for that anyway... a few ideas have been suggested.)\n\n> - Aside from the qsort{_arg} precedence, is there a practical reason for keeping the new global functions in their own files?\n\nBetter idea for layout welcome. One thing I wondered while trying to
One thing I wondered while trying to\nfigure out where to put functions that operate on itemptr: why is\nitemptr_encode() in src/include/catalog/index.h?!\n\n> - 0002 / 0004\n>\n> +/* Search and unique functions inline in header. */\n>\n> The functions are pretty small, but is there some advantage for inlining these?\n\nGlibc's bsearch definition is already in a header for inlining (as is\nour qunique), so I thought I should preserve that characteristic on\nprinciple. I don't have any evidence though. Other libcs I looked at\ndidn't have bsearch in a header. So by doing this we make the\ngenerated code the same across platforms (all other relevant things\nbeing equal). I don't know if it really makes much difference,\nespecially since in this case the comparator and size would still be\ninlined if we defined it in the .c (unlike standard bsearch)...\nProbably only lazy_tid_reaped() calls it enough to potentially show\nany difference in a non-microbenchmark workload, if anything does.\n\n> - 0003\n>\n> #include \"lib/qunique.h\" is not needed anymore.\n\nFixed.\n\n> This isn't quite relevant for the current patch perhaps, but I'm wondering why we don't already call bsearch for RelationHasSysCache() and RelationSupportsSysCache().\n\nRight, I missed that. Done. Nice to delete some more code.\n\n> - 0008\n>\n> +#define ST_COMPARE(a, b, cxt) \\\n> + DatumGetInt32(FunctionCall2Coll(&cxt->flinfo, cxt->collation, *a, *b))\n>\n> This seems like a pretty heavyweight comparison, so I'm not sure inlining buys us much, but it seems also there are fewer branches this way. I'll come up with a test and see what happens.\n\nI will be very interested to see the results. 
Thanks!", "msg_date": "Tue, 29 Jun 2021 12:16:16 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A qsort template" }, { "msg_contents": "I spotted a mistake in v3: I didn't rename ST_COMPARE_TYPE to\nST_COMPARE_RET_TYPE in the 0009 patch (well, I did, but forgot to\ncommit before I ran git format-patch). I won't send another tarball\njust for that, but will correct it next time.\n\n\n", "msg_date": "Tue, 29 Jun 2021 13:11:06 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Tue, Jun 29, 2021 at 1:11 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I spotted a mistake in v3: I didn't rename ST_COMPARE_TYPE to\n> ST_COMPARE_RET_TYPE in the 0009 patch (well, I did, but forgot to\n> commit before I ran git format-patch). I won't send another tarball\n> just for that, but will correct it next time.\n\nHere's a version that includes a rather hackish test module that you\nmight find useful to explore various weird effects. Testing sorting\nroutines is really hard, of course... there's a zillion parameters and\nthings you could do in the data and cache effects etc etc. One of the\nmain things that jumps out pretty clearly though with these simple\ntests is that sorting 6 byte ItemPointerData objects is *really slow*\ncompared to more natural object sizes (look at the times and the\nMEMORY values in the scripts). Another is that specialised sort\nfunctions are much faster than traditional qsort (being one of the\ngoals of this exercise). 
Sadly, the 64 bit comparison technique is\nnot looking too good in the output of this test.", "msg_date": "Tue, 29 Jun 2021 18:56:07 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Tue, Jun 29, 2021 at 2:56 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> Here's a version that includes a rather hackish test module that you\n> might find useful to explore various weird effects. Testing sorting\n> routines is really hard, of course... there's a zillion parameters and\n> things you could do in the data and cache effects etc etc. One of the\n\nThat module is incredibly useful!\n\nYeah, while brushing up on recent findings on sorting, it's clear there's a\nhuge amount of options with different tradeoffs. I did see your tweet last\nyear about the \"small sort\" threshold that was tested on a VAX machine, but\nhadn't given it any thought til now. Looking around, I've seen quite a\nrange, always with the caveat of \"it depends\". A couple interesting\nvariations:\n\nGolang uses 12, with an extra tweak:\n\n// Do ShellSort pass with gap 6\n// It could be written in this simplified form cause b-a <= 12\nfor i := a + 6; i < b; i++ {\n if data.Less(i, i-6) {\n data.Swap(i, i-6)\n }\n}\ninsertionSort(data, a, b)\n\nAndrei Alexandrescu gave a couple talks discussing the small-sort part of\nquicksort, and demonstrated a ruthlessly-optimized make-heap +\nunguarded-insertion-sort, using a threshold of 256. 
He reported a 6%\nspeed-up sorting a million doubles, IIRC:\n\nvideo: https://www.youtube.com/watch?v=FJJTYQYB1JQ\nslides:\nhttps://github.com/CppCon/CppCon2019/blob/master/Presentations/speed_is_found_in_the_minds_of_people/speed_is_found_in_the_minds_of_people__andrei_alexandrescu__cppcon_2019.pdf\n\nThat might not be workable for us, but it's a fun talk.\n\n> main things that jumps out pretty clearly though with these simple\n> tests is that sorting 6 byte ItemPointerData objects is *really slow*\n> compared to more natural object sizes (look at the times and the\n> MEMORY values in the scripts). Another is that specialised sort\n> functions are much faster than traditional qsort (being one of the\n> goals of this exercise). Sadly, the 64 bit comparison technique is\n> not looking too good in the output of this test.\n\nOne of the points of the talk I linked to is \"if doing the sensible thing\nmakes things worse, try something silly instead\".\n\nAnyway, I'll play around with the scripts and see if something useful pops\nout.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com
", "msg_date": "Tue, 29 Jun 2021 12:40:53 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "I wrote:\n\n> One of the points of the talk I linked to is \"if doing the sensible thing\nmakes things worse, try something silly instead\".\n\nFor item pointers, it made sense to try doing math to reduce the number of\nbranches. 
That made things worse, so let's try the opposite: Increase the\nnumber of branches so we do less math. In the attached patch (applies on\ntop of your 0012 and a .txt to avoid confusing the CF bot), I test a new\ncomparator with this approach, and also try a wider range of thresholds.\nThe thresholds don't seem to make any noticeable difference with this data\ntype, but the new comparator (cmp=ids below) gives a nice speedup in this\ntest:\n\n# SELECT test_sort_itemptr();\nNOTICE: [traditional qsort] order=random, threshold=7, cmp=32, test=0,\ntime=4.964657\nNOTICE: [traditional qsort] order=random, threshold=7, cmp=32, test=1,\ntime=5.185384\nNOTICE: [traditional qsort] order=random, threshold=7, cmp=32, test=2,\ntime=5.058179\nNOTICE: order=random, threshold=7, cmp=std, test=0, time=2.810627\nNOTICE: order=random, threshold=7, cmp=std, test=1, time=2.804940\nNOTICE: order=random, threshold=7, cmp=std, test=2, time=2.800677\nNOTICE: order=random, threshold=7, cmp=ids, test=0, time=1.692711\nNOTICE: order=random, threshold=7, cmp=ids, test=1, time=1.694546\nNOTICE: order=random, threshold=7, cmp=ids, test=2, time=1.692839\nNOTICE: order=random, threshold=12, cmp=std, test=0, time=2.687033\nNOTICE: order=random, threshold=12, cmp=std, test=1, time=2.681974\nNOTICE: order=random, threshold=12, cmp=std, test=2, time=2.687833\nNOTICE: order=random, threshold=12, cmp=ids, test=0, time=1.666418\nNOTICE: order=random, threshold=12, cmp=ids, test=1, time=1.666188\nNOTICE: order=random, threshold=12, cmp=ids, test=2, time=1.664176\nNOTICE: order=random, threshold=16, cmp=std, test=0, time=2.574147\nNOTICE: order=random, threshold=16, cmp=std, test=1, time=2.579981\nNOTICE: order=random, threshold=16, cmp=std, test=2, time=2.572861\nNOTICE: order=random, threshold=16, cmp=ids, test=0, time=1.699432\nNOTICE: order=random, threshold=16, cmp=ids, test=1, time=1.703075\nNOTICE: order=random, threshold=16, cmp=ids, test=2, time=1.697173\nNOTICE: order=random, threshold=32, 
cmp=std, test=0, time=2.750040\nNOTICE: order=random, threshold=32, cmp=std, test=1, time=2.744138\nNOTICE: order=random, threshold=32, cmp=std, test=2, time=2.748026\nNOTICE: order=random, threshold=32, cmp=ids, test=0, time=1.677414\nNOTICE: order=random, threshold=32, cmp=ids, test=1, time=1.683792\nNOTICE: order=random, threshold=32, cmp=ids, test=2, time=1.701309\nNOTICE: [traditional qsort] order=increasing, threshold=7, cmp=32, test=0,\ntime=2.543837\nNOTICE: [traditional qsort] order=increasing, threshold=7, cmp=32, test=1,\ntime=2.290497\nNOTICE: [traditional qsort] order=increasing, threshold=7, cmp=32, test=2,\ntime=2.262956\nNOTICE: order=increasing, threshold=7, cmp=std, test=0, time=1.033052\nNOTICE: order=increasing, threshold=7, cmp=std, test=1, time=1.032079\nNOTICE: order=increasing, threshold=7, cmp=std, test=2, time=1.041836\nNOTICE: order=increasing, threshold=7, cmp=ids, test=0, time=0.367355\nNOTICE: order=increasing, threshold=7, cmp=ids, test=1, time=0.367428\nNOTICE: order=increasing, threshold=7, cmp=ids, test=2, time=0.367384\nNOTICE: order=increasing, threshold=12, cmp=std, test=0, time=1.004991\nNOTICE: order=increasing, threshold=12, cmp=std, test=1, time=1.008045\nNOTICE: order=increasing, threshold=12, cmp=std, test=2, time=1.010778\nNOTICE: order=increasing, threshold=12, cmp=ids, test=0, time=0.370944\nNOTICE: order=increasing, threshold=12, cmp=ids, test=1, time=0.368669\nNOTICE: order=increasing, threshold=12, cmp=ids, test=2, time=0.370100\nNOTICE: order=increasing, threshold=16, cmp=std, test=0, time=1.023682\nNOTICE: order=increasing, threshold=16, cmp=std, test=1, time=1.025805\nNOTICE: order=increasing, threshold=16, cmp=std, test=2, time=1.022005\nNOTICE: order=increasing, threshold=16, cmp=ids, test=0, time=0.365398\nNOTICE: order=increasing, threshold=16, cmp=ids, test=1, time=0.365586\nNOTICE: order=increasing, threshold=16, cmp=ids, test=2, time=0.364807\nNOTICE: order=increasing, threshold=32, cmp=std, test=0, 
time=0.950780\nNOTICE: order=increasing, threshold=32, cmp=std, test=1, time=0.949920\nNOTICE: order=increasing, threshold=32, cmp=std, test=2, time=0.953239\nNOTICE: order=increasing, threshold=32, cmp=ids, test=0, time=0.367866\nNOTICE: order=increasing, threshold=32, cmp=ids, test=1, time=0.372179\nNOTICE: order=increasing, threshold=32, cmp=ids, test=2, time=0.371115\nNOTICE: [traditional qsort] order=decreasing, threshold=7, cmp=32, test=0,\ntime=2.317475\nNOTICE: [traditional qsort] order=decreasing, threshold=7, cmp=32, test=1,\ntime=2.323446\nNOTICE: [traditional qsort] order=decreasing, threshold=7, cmp=32, test=2,\ntime=2.326714\nNOTICE: order=decreasing, threshold=7, cmp=std, test=0, time=1.022270\nNOTICE: order=decreasing, threshold=7, cmp=std, test=1, time=1.015133\nNOTICE: order=decreasing, threshold=7, cmp=std, test=2, time=1.016367\nNOTICE: order=decreasing, threshold=7, cmp=ids, test=0, time=0.386884\nNOTICE: order=decreasing, threshold=7, cmp=ids, test=1, time=0.388397\nNOTICE: order=decreasing, threshold=7, cmp=ids, test=2, time=0.386328\nNOTICE: order=decreasing, threshold=12, cmp=std, test=0, time=0.993594\nNOTICE: order=decreasing, threshold=12, cmp=std, test=1, time=0.995031\nNOTICE: order=decreasing, threshold=12, cmp=std, test=2, time=0.995320\nNOTICE: order=decreasing, threshold=12, cmp=ids, test=0, time=0.391243\nNOTICE: order=decreasing, threshold=12, cmp=ids, test=1, time=0.391938\nNOTICE: order=decreasing, threshold=12, cmp=ids, test=2, time=0.392478\nNOTICE: order=decreasing, threshold=16, cmp=std, test=0, time=1.006240\nNOTICE: order=decreasing, threshold=16, cmp=std, test=1, time=1.009817\nNOTICE: order=decreasing, threshold=16, cmp=std, test=2, time=1.010281\nNOTICE: order=decreasing, threshold=16, cmp=ids, test=0, time=0.386388\nNOTICE: order=decreasing, threshold=16, cmp=ids, test=1, time=0.385801\nNOTICE: order=decreasing, threshold=16, cmp=ids, test=2, time=0.384484\nNOTICE: order=decreasing, threshold=32, cmp=std, test=0, 
time=0.959647\nNOTICE: order=decreasing, threshold=32, cmp=std, test=1, time=0.958833\nNOTICE: order=decreasing, threshold=32, cmp=std, test=2, time=0.960234\nNOTICE: order=decreasing, threshold=32, cmp=ids, test=0, time=0.403014\nNOTICE: order=decreasing, threshold=32, cmp=ids, test=1, time=0.393329\nNOTICE: order=decreasing, threshold=32, cmp=ids, test=2, time=0.395659\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 1 Jul 2021 12:39:32 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Fri, Jul 2, 2021 at 4:39 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> For item pointers, it made sense to try doing math to reduce the number of branches. That made things worse, so let's try the opposite: Increase the number of branches so we do less math. In the attached patch (applies on top of your 0012 and a .txt to avoid confusing the CF bot), I test a new comparator with this approach, and also try a wider range of thresholds. The thresholds don't seem to make any noticeable difference with this data type, but the new comparator (cmp=ids below) gives a nice speedup in this test:\n\n> NOTICE: [traditional qsort] order=random, threshold=7, cmp=32, test=0, time=4.964657\n\n> NOTICE: order=random, threshold=7, cmp=std, test=0, time=2.810627\n\n> NOTICE: order=random, threshold=7, cmp=ids, test=0, time=1.692711\n\nOooh. So, the awkwardness of the 64 bit maths with unaligned inputs (even\nthough we obtain all inputs with 16 bit loads) was hurting, and you\nrealised the same sort of thing might be happening also with the 32\nbit version and went the other way. (It'd be nice to understand\nexactly why.)\n\nI tried your 16 bit comparison version on Intel, AMD and Apple CPUs\nand the results were all in the same ballpark. 
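For readers skimming the thread, the cmp=ids idea boils down to comparing the three 16 bit fields of the item pointer in order, with explicit branches instead of arithmetic on reassembled wider values. A hedged sketch of the shape of it, using an illustrative struct rather than the real ItemPointerData declaration:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for the 6-byte ItemPointerData: a 32 bit
 * block number stored as two 16 bit halves, plus a 16 bit offset
 * number.  Field names here are made up for this sketch. */
typedef struct
{
	uint16_t	bi_hi;			/* high half of block number */
	uint16_t	bi_lo;			/* low half of block number */
	uint16_t	offset;			/* offset number within the block */
} tid;

/* Three aligned 16 bit loads with early-out branches: no unaligned
 * wide loads and no subtraction tricks. */
static int
tid_cmp(const tid *a, const tid *b)
{
	if (a->bi_hi != b->bi_hi)
		return a->bi_hi < b->bi_hi ? -1 : 1;
	if (a->bi_lo != b->bi_lo)
		return a->bi_lo < b->bi_lo ? -1 : 1;
	if (a->offset != b->offset)
		return a->offset < b->offset ? -1 : 1;
	return 0;
}
```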
For random input, I\nsee something like ~1.7x speedup over traditional qsort from\nspecialising (cmp=std), and ~2.7x from going 16 bit (cmp=ids). For\nincreasing and decreasing input, it's ~2x speedup from specialising\nand ~4x speedup from going 16 bit. Beautiful.\n\nOne thing I'm wondering about is whether it's worth having stuff to\nsupport future experimentation like ST_SORT_SMALL_THRESHOLD and\nST_COMPARE_RET_TYPE in the tree, or whether we should pare it back to\nthe minimal changes that definitely produce results. I think I'd like\nto keep those changes: even if it may be some time, possibly an\ninfinite amount, before we figure out how to tune the thresholds\nprofitably, giving them names instead of using magic numbers seems\nlike progress.\n\nThe Alexandrescu talk was extremely entertaining, thanks.\n\n\n", "msg_date": "Fri, 2 Jul 2021 10:09:39 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Thu, Jul 1, 2021 at 6:10 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> One thing I'm wondering about is whether it's worth having stuff to\n> support future experimentation like ST_SORT_SMALL_THRESHOLD and\n> ST_COMPARE_RET_TYPE in the tree, or whether we should pare it back to\n> the minimal changes that definitely produce results. I think I'd like\n> to keep those changes: even if it may be some time, possibly an\n> infinite amount, before we figure out how to tune the thresholds\n> profitably, giving them names instead of using magic numbers seems\n> like progress.\n\nI suspect if we experiment on two extremes of type \"heaviness\" (accessing\nand comparing trivial or not), such as uint32 and tuplesort, we'll have a\npretty good idea what the parameters should be, if anything different. 
I'll\ndo some testing along those lines.\n\n(BTW, I just realized I lied and sent a .patch file after all, oops)\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 1 Jul 2021 22:32:31 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Fri, Jul 2, 2021 at 2:32 PM John Naylor <john.naylor@enterprisedb.com> wrote:\n> I suspect if we experiment on two extremes of type \"heaviness\" (accessing and comparing trivial or not), such as uint32 and tuplesort, we'll have a pretty good idea what the parameters should be, if anything different. I'll do some testing along those lines.\n\nCool.\n\nSince you are experimenting with tuplesort and likely thinking similar\nthoughts, here's a patch I've been using to explore that area. I've\nseen it get, for example, ~1.18x speedup for simple index builds in\nfavourable winds (YMMV, early hacking results only). 
Currently, it\nkicks in when the leading column is of type int4, int8, timestamp,\ntimestamptz, date or text + friends (when abbreviatable, currently\nthat means \"C\" and ICU collations only), while increasing the\nexecutable by only 8.5kB (Clang, amd64, -O2, no debug).\n\nThese types are handled with just three specialisations. Their custom\n\"fast\" comparators all boiled down to comparisons of datum bits,\nvarying only in signedness and width, so I tried throwing them away\nand using 3 new common routines. Then I extended\ntuplesort_sort_memtuples()'s pre-existing specialisation dispatch to\nrecognise qualifying users of those and select 3 corresponding sort\nspecialisations.\n\nIt might turn out to be worth burning some more executable size on\nextra variants (for example, see XXX notes in the code comments for\nopportunities; one could also go nuts trying smaller things like\nspecial cases for not-null, nulls first, reverse sort, ... to kill all\nthose branches), or not.", "msg_date": "Sun, 4 Jul 2021 16:27:21 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Sun, Jul 4, 2021 at 9:58 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Fri, Jul 2, 2021 at 2:32 PM John Naylor <john.naylor@enterprisedb.com> wrote:\n> > I suspect if we experiment on two extremes of type \"heaviness\" (accessing and comparing trivial or not), such as uint32 and tuplesort, we'll have a pretty good idea what the parameters should be, if anything different. I'll do some testing along those lines.\n>\n> Cool.\n>\n> Since you are experimenting with tuplesort and likely thinking similar\n> thoughts, here's a patch I've been using to explore that area. I've\n> seen it get, for example, ~1.18x speedup for simple index builds in\n> favourable winds (YMMV, early hacking results only). 
Currently, it\n> kicks in when the leading column is of type int4, int8, timestamp,\n> timestamptz, date or text + friends (when abbreviatable, currently\n> that means \"C\" and ICU collations only), while increasing the\n> executable by only 8.5kB (Clang, amd64, -O2, no debug).\n>\n> These types are handled with just three specialisations. Their custom\n> \"fast\" comparators all boiled down to comparisons of datum bits,\n> varying only in signedness and width, so I tried throwing them away\n> and using 3 new common routines. Then I extended\n> tuplesort_sort_memtuples()'s pre-existing specialisation dispatch to\n> recognise qualifying users of those and select 3 corresponding sort\n> specialisations.\n>\n> It might turn out to be worth burning some more executable size on\n> extra variants (for example, see XXX notes in the code comments for\n> opportunities; one could also go nuts trying smaller things like\n> special cases for not-null, nulls first, reverse sort, ... to kill all\n> those branches), or not.\n\nThe patch does not apply on Head anymore, could you rebase and post a\npatch. I'm changing the status to \"Waiting for Author\".\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 15 Jul 2021 17:19:49 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Thu, Jul 15, 2021 at 7:50 AM vignesh C <vignesh21@gmail.com> wrote:\n> The patch does not apply on Head anymore, could you rebase and post a\n> patch. I'm changing the status to \"Waiting for Author\".\n\nThe patch set is fine. The error is my fault since I attached an\nexperimental addendum and neglected to name it as .txt. I've set it back to\n\"needs review\" and will resume testing shortly.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com
", "msg_date": "Thu, 15 Jul 2021 07:57:54 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Thu, Jun 17, 2021 at 1:20 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Jun 17, 2021 at 1:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > The big problem in my mind, which would not be alleviated in the\n> > slightest by having a separate file, is that it'd be easy to miss\n> > removing entries if they ever become obsolete.\n>\n> I suppose you could invent some kind of declaration syntax in a\n> comment near the use of the pseudo-typename in the source tree that is\n> mechanically extracted.\n\nWhat do you think about something like this?", "msg_date": "Thu, 22 Jul 2021 19:30:04 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Sun, Jul 4, 2021 at 12:27 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> Since you are experimenting with tuplesort and likely thinking similar\n> thoughts, here's a patch I've been using to explore that area. I've\n> seen it get, for example, ~1.18x speedup for simple index builds in\n> favourable winds (YMMV, early hacking results only). Currently, it\n> kicks in when the leading column is of type int4, int8, timestamp,\n> timestamptz, date or text + friends (when abbreviatable, currently\n> that means \"C\" and ICU collations only), while increasing the\n> executable by only 8.5kB (Clang, amd64, -O2, no debug).\n>\n> These types are handled with just three specialisations. 
Their custom\n> \"fast\" comparators all boiled down to comparisons of datum bits,\n> varying only in signedness and width, so I tried throwing them away\n> and using 3 new common routines. Then I extended\n> tuplesort_sort_memtuples()'s pre-existing specialisation dispatch to\n> recognise qualifying users of those and select 3 corresponding sort\n> specialisations.\n\nI got around to getting a benchmark together to serve as a starting point.\nI based it off something I got from the archives, but don't remember where\n(I seem to remember Tomas Vondra wrote the original, but not sure). To\nstart I just used types that were there already -- int, text, numeric. The\nlatter two won't be helped by this patch, but I wanted to keep something\nlike that so we can see what kind of noise variation there is. I'll\nprobably cut text out in the future and just keep numeric for that purpose.\n\nI've attached both the script and a crude spreadsheet. I'll try to figure\nout something nicer for future tests, and maybe some graphs. The\n\"comparison\" sheet has the results side by side (min of five). There are 7\ndistributions of values:\n- random\n- sorted\n- \"almost sorted\"\n- reversed\n- organ pipe (first half ascending, second half descending)\n- rotated (sorted but then put the smallest at the end)\n- random 0s/1s\n\nI included both \"select a\" and \"select *\" to make sure we have the recent\ndatum sort optimization represented. The results look pretty good for ints\n-- about the same speed up master gets going from tuple sorts to datum\nsorts, and those got faster in turn also.\n\nNext I think I'll run microbenchmarks on int64s with the test harness you\nattached earlier, and experiment with the qsort parameters a bit.\n\nI'm also attaching your tuplesort patch so others can see what exactly I'm\ncomparing.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 29 Jul 2021 20:34:01 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Fri, Jul 30, 2021 at 3:34 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> I'm also attaching your tuplesort patch so others can see what exactly I'm comparing.\n\nIf you're going to specialize the sort routine for unsigned integer\nstyle abbreviated keys then you might as well cover all relevant\nopclasses/types. Almost all abbreviated key schemes produce\nconditioned datums that are designed to use simple 3-way unsigned int\ncomparator. It's not just text. (Actually, the only abbreviated key\nscheme that doesn't do it that way is numeric.)\n\nOffhand I know that UUID, macaddr, and inet all have abbreviated keys
The results look pretty good for ints\n-- about the same speed up master gets going from tuple sorts to datum\nsorts, and those got faster in turn also.\n\nNext I think I'll run microbenchmarks on int64s with the test harness you\nattached earlier, and experiment with the qsort parameters a bit.\n\nI'm also attaching your tuplesort patch so others can see what exactly I'm\ncomparing.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 29 Jul 2021 20:34:01 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Fri, Jul 30, 2021 at 3:34 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> I'm also attaching your tuplesort patch so others can see what exactly I'm comparing.\n\nIf you're going to specialize the sort routine for unsigned integer\nstyle abbreviated keys then you might as well cover all relevant\nopclasses/types. Almost all abbreviated key schemes produce\nconditioned datums that are designed to use simple 3-way unsigned int\ncomparator. It's not just text. (Actually, the only abbreviated key\nscheme that doesn't do it that way is numeric.)\n\nOffhand I know that UUID, macaddr, and inet all have abbreviated keys\nthat can use your new ssup_datum_binary_cmp() comparator instead of\ntheir own duplicated comparator (which will make them use the\ncorresponding specialized sort routine inside tuplesort.c).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 30 Jul 2021 10:10:49 +0300", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "Em qui., 29 de jul. de 2021 às 21:34, John Naylor <\njohn.naylor@enterprisedb.com> escreveu:\n\n>\n> On Sun, Jul 4, 2021 at 12:27 AM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> >\n> > Since you are experimenting with tuplesort and likely thinking similar\n> > thoughts, here's a patch I've been using to explore that area. 
I've\n> > seen it get, for example, ~1.18x speedup for simple index builds in\n> > favourable winds (YMMV, early hacking results only). Currently, it\n> > kicks in when the leading column is of type int4, int8, timestamp,\n> > timestamptz, date or text + friends (when abbreviatable, currently\n> > that means \"C\" and ICU collations only), while increasing the\n> > executable by only 8.5kB (Clang, amd64, -O2, no debug).\n> >\n> > These types are handled with just three specialisations. Their custom\n> > \"fast\" comparators all boiled down to comparisons of datum bits,\n> > varying only in signedness and width, so I tried throwing them away\n> > and using 3 new common routines. Then I extended\n> > tuplesort_sort_memtuples()'s pre-existing specialisation dispatch to\n> > recognise qualifying users of those and select 3 corresponding sort\n> > specialisations.\n>\n> I got around to getting a benchmark together to serve as a starting point.\n> I based it off something I got from the archives, but don't remember where\n> (I seem to remember Tomas Vondra wrote the original, but not sure). To\n> start I just used types that were there already -- int, text, numeric. The\n> latter two won't be helped by this patch, but I wanted to keep something\n> like that so we can see what kind of noise variation there is. I'll\n> probably cut text out in the future and just keep numeric for that purpose.\n>\n> I've attached both the script and a crude spreadsheet. I'll try to figure\n> out something nicer for future tests, and maybe some graphs. The\n> \"comparison\" sheet has the results side by side (min of five). There are 6\n> distributions of values:\n> - random\n> - sorted\n> - \"almost sorted\"\n> - reversed\n> - organ pipe (first half ascending, second half descending)\n> - rotated (sorted but then put the smallest at the end)\n> - random 0s/1s\n>\n> I included both \"select a\" and \"select *\" to make sure we have the recent\n> datum sort optimization represented. 
The results look pretty good for ints\n> -- about the same speed up master gets going from tuple sorts to datum\n> sorts, and those got faster in turn also.\n>\n> Next I think I'll run microbenchmarks on int64s with the test harness you\n> attached earlier, and experiment with the qsort parameters a bit.\n>\n> I'm also attaching your tuplesort patch so others can see what exactly I'm\n> comparing.\n>\nThe patch attached does not apply cleanly,\nplease can fix it?\n\nerror: patch failed: src/backend/utils/sort/tuplesort.c:4776\nerror: src/backend/utils/sort/tuplesort.c: patch does not apply\n\nregards,\nRanier Vilela
", "msg_date": "Fri, 30 Jul 2021 08:47:18 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Fri, Jul 30, 2021 at 7:47 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> The patch attached does not apply cleanly,\n> please can fix it?\n\nIt applies just fine with \"patch\", for those wondering.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com
fine with \"patch\", for those wondering.--John NaylorEDB: http://www.enterprisedb.com", "msg_date": "Fri, 30 Jul 2021 10:53:58 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Fri, Jul 30, 2021 at 7:11 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> If you're going to specialize the sort routine for unsigned integer\n> style abbreviated keys then you might as well cover all relevant\n> opclasses/types. Almost all abbreviated key schemes produce\n> conditioned datums that are designed to use simple 3-way unsigned int\n> comparator. It's not just text. (Actually, the only abbreviated key\n> scheme that doesn't do it that way is numeric.)\n\nRight, that was the plan, but this was just experimenting with an\nidea. Looks like John's also seeing evidence that it may be worth\npursuing.\n\n(Re numeric, I guess it must be possible to rearrange things so it can\nuse ssup_datum_signed_cmp; maybe something like NaN -> INT64_MAX, +inf\n-> INT64_MAX - 1, -inf -> INT64_MIN, and then -1 - (whatever we're\ndoing now for normal values).)\n\n> Offhand I know that UUID, macaddr, and inet all have abbreviated keys\n> that can use your new ssup_datum_binary_cmp() comparator instead of\n> their own duplicated comparator (which will make them use the\n> corresponding specialized sort routine inside tuplesort.c).\n\nThanks, I've added these ones, and also gist_bbox_zorder_cmp_abbrev.\n\nI also renamed that function to ssup_datum_unsigned_cmp(), because\n\"binary\" was misleading.", "msg_date": "Mon, 2 Aug 2021 12:01:33 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Fri, Jul 30, 2021 at 12:34 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> I got around to getting a benchmark together to serve as a starting point. 
I based it off something I got from the archives, but don't remember where (I seem to remember Tomas Vondra wrote the original, but not sure). To start I just used types that were there already -- int, text, numeric. The latter two won't be helped by this patch, but I wanted to keep something like that so we can see what kind of noise variation there is. I'll probably cut text out in the future and just keep numeric for that purpose.\n\nThanks, that's very useful.\n\n> I've attached both the script and a crude spreadsheet. I'll try to figure out something nicer for future tests, and maybe some graphs. The \"comparison\" sheet has the results side by side (min of five). There are 6 distributions of values:\n> - random\n> - sorted\n> - \"almost sorted\"\n> - reversed\n> - organ pipe (first half ascending, second half descending)\n> - rotated (sorted but then put the smallest at the end)\n> - random 0s/1s\n>\n> I included both \"select a\" and \"select *\" to make sure we have the recent datum sort optimization represented. The results look pretty good for ints -- about the same speed up master gets going from tuple sorts to datum sorts, and those got faster in turn also.\n\nGreat! I saw similar sorts of numbers. It's really just a few\ncrumbs, nothing compared to the gains David just found over in the\nthread \"Use generation context to speed up tuplesorts\", but on the\nbright side, these crumbs will be magnified by that work.\n\n> Next I think I'll run microbenchmarks on int64s with the test harness you attached earlier, and experiment with the qsort parameters a bit.\n\nCool. 
I haven't had much luck experimenting with that yet, though I\nconsider the promotion from magic numbers to names as an improvement\nin any case.\n\n> I'm also attaching your tuplesort patch so others can see what exactly I'm comparing.\n\nWe've been bouncing around quite a few different ideas and patches in\nthis thread; soon I'll try to bring it back to one patch set with the\nideas that are looking good so far in a more tidied up form.  For the\ntuplesort.c part, I added some TODO notes in\nv3-0001-WIP-Accelerate-tuple-sorting-for-common-types.patch's commit\nmessage (see reply to Peter).\n\n\n", "msg_date": "Mon, 2 Aug 2021 12:40:32 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Mon, Aug 2, 2021 at 12:40 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Great!  I saw similar sorts of numbers.  It's really just a few\n> crumbs, nothing compared to the gains David just found over in the\n> thread \"Use generation context to speed up tuplesorts\", but on the\n> bright side, these crumbs will be magnified by that work.\n\n(Hmm, that also makes me wonder about using a smaller SortTuple when\npossible...)\n\n\n", "msg_date": "Mon, 2 Aug 2021 12:42:54 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Sun, Aug 1, 2021 at 5:41 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Jul 30, 2021 at 12:34 PM John Naylor\n> <john.naylor@enterprisedb.com> wrote:\n> > I got around to getting a benchmark together to serve as a starting point. I based it off something I got from the archives, but don't remember where (I seem to remember Tomas Vondra wrote the original, but not sure). To start I just used types that were there already -- int, text, numeric. 
The latter two won't be helped by this patch, but I wanted to keep something like that so we can see what kind of noise variation there is. I'll probably cut text out in the future and just keep numeric for that purpose.\n>\n> Thanks, that's very useful.\n\nIf somebody wants to get a sense of what the size hit is from all of\nthese specializations, I can recommend the diff feature of bloaty:\n\nhttps://github.com/google/bloaty/blob/master/doc/using.md#size-diffs\n\nObviously you'd approach this by building postgres without the patch,\nand diffing that baseline to postgres with the patch. And possibly\nvariations of the patch, with less or more sort specializations.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 5 Aug 2021 16:18:41 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Mon, Jun 28, 2021 at 8:16 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n[v4 patchset]\n\nHi Thomas,\n\n(Sorry for the delay -- I have some time to put into this now.)\n\n> The lower numbered patches are all things that are reused in many\n> places, and in my humble opinion improve the notation and type safety\n> and code deduplication generally when working with common types\n\nI think 0001-0003 have had enough review previously to commit them, as\nthey are mostly notational. There's a small amount of bitrot, but not\nenough to change the conclusions any. Also 0011 with the missing\n#undef.\n\nOn Thu, Aug 5, 2021 at 7:18 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> If somebody wants to get a sense of what the size hit is from all of\n> these specializations, I can recommend the diff feature of bloaty:\n>\n> https://github.com/google/bloaty/blob/master/doc/using.md#size-diffs\n>\n> Obviously you'd approach this by building postgres without the patch,\n> and diffing that baseline to postgres with the patch. 
And possibly\n> variations of the patch, with less or more sort specializations.\n\nThanks, that's a neat feature! For 0001-0003, the diff shows +700\nbytes in memory, so pretty small:\n\n$ bloaty -s vm src/backend/postgres -- src/backend/postgres.orig\n FILE SIZE VM SIZE\n -------------- --------------\n +0.0% +608 +0.0% +608 .text\n +0.0% +64 +0.0% +64 .eh_frame\n +0.0% +24 +0.0% +24 .dynsym\n +0.0% +14 +0.0% +14 .dynstr\n +0.0% +2 +0.0% +2 .gnu.version\n +0.0% +58 [ = ] 0 .debug_abbrev\n +0.1% +48 [ = ] 0 .debug_aranges\n +0.0% +1.65Ki [ = ] 0 .debug_info\n +0.0% +942 [ = ] 0 .debug_line\n +0.1% +26 [ = ] 0 .debug_line_str\n +0.0% +333 [ = ] 0 .debug_loclists\n -0.0% -23 [ = ] 0 .debug_rnglists\n +0.0% +73 [ = ] 0 .debug_str\n -1.0% -4 [ = ] 0 .shstrtab\n +0.0% +20 [ = ] 0 .strtab\n +0.0% +24 [ = ] 0 .symtab\n +131% +3.30Ki [ = ] 0 [Unmapped]\n +0.0% +7.11Ki +0.0% +712 TOTAL\n\n[back to Thomas]\n\n> I tried to measure a speedup in vacuum, but so far I have not. I did\n> learn some things though: While doing that with an uncorrelated index\n> and a lot of deleted tuples, I found that adding more\n> maintenance_work_mem doesn't help beyond a few MB, because then cache\n> misses dominate to the point where it's not better than doing multiple\n> passes (and this is familiar to me from work on hash joins). If I\n> turned on huge pages on Linux and set min_dynamic_shared_memory so\n> that the parallel DSM used by vacuum lives in huge pages, then\n> parallel vacuum with a large maintenance_work_mem starts to do much\n> better than non-parallel vacuum by improving the TLB misses (as with\n> hash joins). I thought that was quite interesting! Perhaps\n> bsearch_itemptr might help with correlated indexes with a lot of\n> deleted indexes (so not dominated by cache misses), though?\n>\n> (I wouldn't be suprised if someone comes up with a much better idea\n> than bsearch for that anyway... 
a few ideas have been suggested.)\n\nThat's interesting about the (un)correlated index having such a large\neffect on cache hit rate! By now there has been some discussion and a\nbenchmark for dead tuple storage [1]. But there doesn't seem to be\nrecent activity on that thread. We might consider adding the ItemPtr\ncomparator work I did in [2] for v15 if we don't have any of the other\nproposals in place by feature freeze. My concern there is the speedups\nI observed were measured when the values were comfortably in L2 cache,\nIIRC. That would need wider testing.\n\nThat said, I think what I'll do next is test the v3-0001 standalone\npatch with tuplesort specializations for more data types. I already\nhave a decent test script that I can build on for this. (this is the\none currently in CI)\n\nThen, I want to do at least preliminary testing of the qsort boundary\nparameters.\n\nThose two things should be doable for this commitfest.\n\n[1] https://www.postgresql.org/message-id/CAD21AoAfOZvmfR0j8VmZorZjL7RhTiQdVttNuC4W-Shdc2a-AA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAFBsxsG_c24CHKA3cWrOP1HynWGLOkLb8hyZfsD9db5g-ZPagA%40mail.gmail.com\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 Jan 2022 17:33:17 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "I wrote:\n\n> That said, I think what I'll do next is test the v3-0001 standalone\n> patch with tuplesort specializations for more data types. I already\n> have a decent test script that I can build on for this.\n\nI've run a test with 10 million records using all types found in the\nv3 patch \"accelerate tuple sorting for common types\", using a variety\nof initial orderings, covering index build (btree only, no gist) and\nqueries (both single value and whole record). 
Attached is the test\nscript and a spreadsheet with the raw data as well as comparisons of\nthe min runtimes in seconds from 5 runs. This is using gcc 11.1 on\nfairly recent Intel hardware.\n\nOverall, this shows a good improvement for these types. One exception\nis the \"0/1\" ordering, which is two values in random order. I'm\nguessing it's because of the cardinality detector, but some runs have\napparent possible regressions. It's a bit high and sporadic to just\nblow off as noise, but this case might not be telling us anything\nuseful.\n\nOther notes:\n\n- The inet type seems unnaturally fast in some places, meaning faster\nthan int or date. That's suspicious, but I haven't yet dug deeper into\nthat.\n\n- With the patch, the VM binary size increases by ~9kB.\n\nI have some hunches on the \"future research\" comments:\n\nXXX Can we avoid repeating the null-handling logic?\n\nMore templating? ;-)\n\nXXX Is it worth specializing for reverse sort?\n\nI'll run a limited test on DESC to see if anything stands out, but I\nwonder if the use case is not common -- I seem to remember seeing DESC\nless often on the first sort key column.\n\nXXX Is it worth specializing for nulls first, nulls last, not null?\n\nEditorializing the null position in queries is not very common in my\nexperience. Not null is interesting since it'd be trivial to pass\nconstant false to the same Apply[XYZ]SortComparator() and let the\ncompiler remove all those branches for us. On the other hand, those\nbranches would be otherwise predicted well, so it might make little or\nno difference.\n\nXXX Should we have separate cases for \"result is authoritative\", \"need\nXXX tiebreaker for atts 1..n (= abbrev case)\", \"need tie breaker for\nXXX atts 2..n\"?\n\nThe first one seems to be the only case where the SortTuple could be\nsmaller, since the tuple pointer is null. That sounds like a good\navenue to explore. 
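As a strawman (purely hypothetical -- the trimmed struct name and layout below are mine, not from any patch here), the element for that case could drop everything but the datum and the null flag:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uintptr_t Datum;		/* stand-in for the real typedef */

/* Roughly the current element layout in tuplesort.c (24 bytes on LP64). */
typedef struct
{
	void	   *tuple;			/* the tuple itself */
	Datum		datum1;			/* value of first key column */
	bool		isnull1;		/* is first key column NULL? */
	int			srctape;		/* source tape, for external merges */
} SortTuple;

/*
 * Hypothetical element for the "result is authoritative" case: the
 * comparison is fully decided by datum1, so no tuple pointer is needed
 * (16 bytes on LP64, ignoring external-merge bookkeeping).
 */
typedef struct
{
	Datum		datum1;
	bool		isnull1;
} DatumOnlySortTuple;
```

On typical 64-bit platforms that's a third fewer bytes per element, so correspondingly more elements fit in the same work_mem. 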
Less memory usage is always good.\n\nNot sure what you mean by the third case -- there are 2+ sort keys,\nbut the first is authoritative from the datum, so the full comparison\ncan skip the first key?\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 18 Jan 2022 21:39:21 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Tue, Jan 18, 2022 at 6:39 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> Editorializing the null position in queries is not very common in my\n> experience. Not null is interesting since it'd be trivial to pass\n> constant false to the same Apply[XYZ]SortComparator() and let the\n> compiler remove all those branches for us. On the other hand, those\n> branches would be otherwise predicted well, so it might make little or\n> no difference.\n\nIf you were going to do this, maybe you could encode NULL directly in\nan abbreviated key. I think that that could be made to work if it was\nlimited to opclasses with abbreviated keys encoded as unsigned\nintegers. Just a thought.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 18 Jan 2022 18:57:45 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Tue, Jan 18, 2022 at 9:58 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Jan 18, 2022 at 6:39 PM John Naylor\n> <john.naylor@enterprisedb.com> wrote:\n> > Editorializing the null position in queries is not very common in my\n> > experience. Not null is interesting since it'd be trivial to pass\n> > constant false to the same Apply[XYZ]SortComparator() and let the\n> > compiler remove all those branches for us. On the other hand, those\n> > branches would be otherwise predicted well, so it might make little or\n> > no difference.\n>\n> If you were going to do this, maybe you could encode NULL directly in\n> an abbreviated key. 
I think that that could be made to work if it was\nlimited to opclasses with abbreviated keys encoded as unsigned\nintegers. Just a thought.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 18 Jan 2022 18:57:45 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Tue, Jan 18, 2022 at 9:58 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Jan 18, 2022 at 6:39 PM John Naylor\n> <john.naylor@enterprisedb.com> wrote:\n> > Editorializing the null position in queries is not very common in my\n> > experience. Not null is interesting since it'd be trivial to pass\n> > constant false to the same Apply[XYZ]SortComparator() and let the\n> > compiler remove all those branches for us. On the other hand, those\n> > branches would be otherwise predicted well, so it might make little or\n> > no difference.\n>\n> If you were going to do this, maybe you could encode NULL directly in\n> an abbreviated key. I think that that could be made to work if it was\n> limited to opclasses with abbreviated keys encoded as unsigned\n> integers. Just a thought.\n\nNow that you mention that, I do remember reading about this technique\nin the context of b-tree access, so it does make sense. If we had that\ncapability, it would be trivial to order the nulls how we want while\nbuilding the sort tuple datums, and the not-null case would be handled\nautomatically. And have a smaller code footprint, I think.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 Jan 2022 11:08:51 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "Hi,\n\nI've run a few tests to get some feel for the effects of various\ncomparators on Datums containing int32. I've attached the full\nresults, as well as the (messy) patch which applies on top of 0012 to\nrun the tests. I'll excerpt some of those results as I go through them\nhere. For now, I only ran input orders of sorted, random, and\nreversed.\n\n1) Specializing\n\nThis is a win in all cases, including SQL-callable comparators (the\ncase here is for _bt_sort_array_elements).\n\nNOTICE: [traditional qsort] size=8MB, order=random, cmp=arg, test=2,\ntime=0.140526\nNOTICE: [inlined] size=8MB, order=random, cmp=inline, test=0, time=0.085023\n\nNOTICE: [SQL arg] size=8MB, order=random, cmp=SQL-arg, test=2, time=0.256708\nNOTICE: [SQL inlined] size=8MB, order=random, cmp=SQL-inline, test=0,\ntime=0.192063\n\n2) Branchless operations\n\nThe int case is for how to perform the comparison, and the SQL case is\nreferring to how to reverse the sort order. Surprisingly, they don't\nseem to help for direct comparisons, and in fact they seem worse. 
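To be concrete about terms, by \"branchless\" I mean the usual subtraction-of-comparisons trick, something like the second function here (a simplified sketch, not the exact code in the patch set):

```c
#include <assert.h>
#include <stdint.h>

/* Conventional 3-way comparator: compiles to conditional branches. */
static inline int
cmp_int32_branching(int32_t a, int32_t b)
{
	if (a < b)
		return -1;
	if (a > b)
		return 1;
	return 0;
}

/*
 * Branchless variant: each comparison evaluates to 0 or 1, and the
 * subtraction maps (greater, equal, less) to (1, 0, -1) with no
 * conditional jumps.
 */
static inline int
cmp_int32_branchless(int32_t a, int32_t b)
{
	return (a > b) - (a < b);
}
```
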
I'll\nhave to dig a bit deeper to be sure, but it's not looking good now.\n\nNOTICE: [inlined] size=8MB, order=random, cmp=inline, test=2, time=0.084781\nNOTICE: [branchless] size=8MB, order=random, cmp=branchless, test=0,\ntime=0.091837\n\nNOTICE: [SQL inlined] size=8MB, order=random, cmp=SQL-inline, test=2,\ntime=0.192018\nNOTICE: [SQL inlined reverse] size=8MB, order=random,\ncmp=SQL-inline-rev, test=0, time=0.190797\n\nWhen the effect is reversing a list, the direct comparisons seem much\nworse, and the SQL ones aren't helped.\n\nNOTICE: [inlined] size=8MB, order=decreasing, cmp=inline, test=2, time=0.024963\nNOTICE: [branchless] size=8MB, order=decreasing, cmp=branchless,\ntest=0, time=0.036423\n\nNOTICE: [SQL inlined] size=8MB, order=decreasing, cmp=SQL-inline,\ntest=0, time=0.125182\nNOTICE: [SQL inlined reverse] size=8MB, order=increasing,\ncmp=SQL-inline-rev, test=0, time=0.127051\n\n--\nSince I have a couple more planned tests, I'll keep a running tally on\nthe current state of the patch set so that summaries are not scattered\nover many emails:\n\n0001 - bsearch and unique is good to have, and we can keep the return\ntype pending further tests\n0002/3 - I've yet to see a case where branchless comparators win, but\nother than that, these are good. Notational improvement and not\nperformance sensitive.\n\n0004/5 - Computing the arguments slows it down, but accessing the\nunderlying int16s gives an improvement. [1] Haven't done an in-situ\ntest on VACUUM. Could be worth it for pg15, since I imagine the\nproposals for dead tuple storage won't be ready this cycle.\n0006 - I expect this to be slower too. I also wonder if this could\nalso use the global function in 0004 once it's improved.\n\n0007 - untested\n\n0008 - Good performance in microbenchmarks, no in-situ testing.\nInlined reversal is not worth the binary space or notational overhead.\n\n0009 - Based on 0004, I would guess that computing the arguments is\ntoo slow. 
Not sure how to test in-situ to see if specializing helps.\n\n0010 - Thresholds on my TODO list.\n\n0011 - A simple correction -- I'll go ahead and commit this.\n\nv3-0001 comparators for abbreviated keys - Clearly a win, especially\nfor the \"unsigned\" case [2]. There are still possible improvements,\nbut they seem like a pg16 project(s).\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKG%2BS5SMoG8Z2PHj0bsK70CxVLgqQR1orQJq6Cjgibu26vA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAFBsxsEFGAJ9eBpQVb5a86BE93WER3497zn2OT5wbjm1HHcqgA%40mail.gmail.com\n(I just realized in that message I didn't attach the script for that,\nand also attached an extra draft spreadsheet. I'll improve the tests\nand rerun later)\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 27 Jan 2022 18:25:00 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "I wrote:\n\n> 0010 - Thresholds on my TODO list.\n\nI did some basic tests on the insertion sort thresholds, and it looks\nlike we could safely and profitably increase the current value from 7\nto 20 or so, in line with other more recent implementations. I've\nattached an addendum on top of 0012 and the full test results on an\nIntel Coffee Lake machine with gcc 11.1. I found that the object test\nsetup in 0012 had some kind of bug that was comparing the pointer of\nthe object array. Rather than fix that, I decided to use Datums, but\nwith the two extremes in comparator: simple branching with machine\ninstructions vs. a SQL-callable function. The papers I've read\nindicate the results for Datum sizes would not be much different for\nsmall structs. The largest existing sort element is SortTuple, but\nthat's only 24 bytes and has a bulky comparator as well.\n\nThe first thing to note is that I rejected outright any testing of a\n\"middle value\" where the pivot is simply the middle of the array. 
Even\nthe Bentley and McIlroy paper which is the reference for our\nimplementation says \"The range that consists of the single integer 7\ncould be eliminated, but has been left adjustable because on some\nmachines larger ranges are a few percent better\".\n\nI tested thresholds up to 64, which is where I guessed results would get\nworse (most implementations are smaller than that). Here are the best\nthresholds at a quick glance:\n\n- elementary comparator:\n\nrandom: 16 or greater\ndecreasing, rotate: get noticeably better all the way up to 64\norgan: little difference, but seems to get better all the way up to 64\n0/1: seems to get worse above 20\n\n- SQL-callable comparator:\n\nrandom: between 12 and 20, but slight differences until 32\ndecreasing, rotate: get noticeably better all the way up to 64\norgan: seems best at 12, but slight differences until 32\n0/1: slight differences\n\nBased on these tests and this machine, it seems 20 is a good default\nvalue. I'll repeat this test on one older Intel and one non-Intel\nplatform with older compilers.\n\n--\nRunning tally of patchset:\n\n0001 - bsearch and unique is good to have, and we can keep the return\ntype pending further tests -- if none happen this cycle, suggest\ncommitting this without the return type symbol.\n0002/3 - I've yet to see a case where branchless comparators win, but\nother than that, these are good. Notational improvement and not\nperformance sensitive.\n\n0004/5 - Computing the arguments slows it down, but accessing the\nunderlying int16s gives an improvement. [1] Haven't done an in-situ\ntest on VACUUM. Could be worth it for pg15, since I imagine the\nproposals for dead tuple storage won't be ready this cycle.\n0006 - I expect this to be slower too. 
I also wonder if this could\nalso use the global function in 0004 once it's improved.\n\n0007 - untested\n\n0008 - Good performance in microbenchmarks, no in-situ testing.\nInlined reversal is not worth the binary space or notational overhead.\n\n0009 - Based on 0004, I would guess that computing the arguments is\ntoo slow. Not sure how to test in-situ to see if specializing helps.\n\n0010 - Suggest leaving out the middle threshold and setting the\ninsertion sort threshold to ~20. Might also name them\nST_INSERTION_SORT_THRESHOLD and ST_NINTHER_THRESHOLD. (TODO: test on\nother platforms)\n\n0011 - Committed.\n\nv3-0001 comparators for abbreviated keys - Clearly a win in this state\nalready, especially\nfor the \"unsigned\" case [2]. (gist untested) There are additional\npossible improvements mentioned,\nbut they seem like a PG16 project(s).\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKG%2BS5SMoG8Z2PHj0bsK70CxVLgqQR1orQJq6Cjgibu26vA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAFBsxsEFGAJ9eBpQVb5a86BE93WER3497zn2OT5wbjm1HHcqgA%40mail.gmail.com\n(TODO: refine test)\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 31 Jan 2022 21:37:27 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "I wrote:\n\n> > 0010 - Thresholds on my TODO list.\n>\n> I did some basic tests on the insertion sort thresholds, and it looks\n> like we could safely and profitably increase the current value from 7\n> to 20 or so, in line with other more recent implementations. I've\n> attached an addendum on top of 0012 and the full test results on an\n> Intel Coffee Lake machine with gcc 11.1. I found that the object test\n> setup in 0012 had some kind of bug that was comparing the pointer of\n> the object array. Rather than fix that, I decided to use Datums, but\n> with the two extremes in comparator: simple branching with machine\n> instructions vs. 
a SQL-callable function. The papers I've read\n> indicate the results for Datum sizes would not be much different for\n> small structs. The largest existing sort element is SortTuple, but\n> that's only 24 bytes and has a bulky comparator as well.\n>\n> The first thing to note is that I rejected outright any testing of a\n> \"middle value\" where the pivot is simply the middle of the array. Even\n> the Bentley and McIlroy paper which is the reference for our\n> implementation says \"The range that consists of the single integer 7\n> could be eliminated, but has been left adjustable because on some\n> machines larger ranges are a few percent better\".\n>\n> I tested thresholds up to 64, which is where I guessed results would get\n> worse (most implementations are smaller than that). Here are the best\n> thresholds at a quick glance:\n>\n> - elementary comparator:\n>\n> random: 16 or greater\n> decreasing, rotate: get noticeably better all the way up to 64\n> organ: little difference, but seems to get better all the way up to 64\n> 0/1: seems to get worse above 20\n>\n> - SQL-callable comparator:\n>\n> random: between 12 and 20, but slight differences until 32\n> decreasing, rotate: get noticeably better all the way up to 64\n> organ: seems best at 12, but slight differences until 32\n> 0/1: slight differences\n>\n> Based on these tests and this machine, it seems 20 is a good default\n> value. I'll repeat this test on one older Intel and one non-Intel\n> platform with older compilers.\n\nThe above was an Intel Comet Lake / gcc 11, and I've run the same test\non a Haswell-era Xeon / gcc 8 and a Power8 machine / gcc 4.8. The\nresults on those machines are pretty close to the above (full results\nattached). The noticeable exception is the Power8 on random input with\na slow comparator -- those measurements there are more random than\nothers so we can't draw conclusions from them, but the deviations are\nsmall in any case. 
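For anyone following along, the knob being tuned is just the partition size below which the template gives up on quicksort recursion and switches to insertion sort -- schematically something like this (a toy sketch with a naive last-element pivot; the real sort_template.h also does median-of-three/ninther pivot selection and other tricks):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define ST_INSERTION_SORT_THRESHOLD 20	/* the value under test */

static void
insertion_sort_int32(int32_t *a, size_t n)
{
	for (size_t i = 1; i < n; i++)
		for (size_t j = i; j > 0 && a[j - 1] > a[j]; j--)
		{
			int32_t		tmp = a[j - 1];

			a[j - 1] = a[j];
			a[j] = tmp;
		}
}

static void
sort_int32(int32_t *a, size_t n)
{
	if (n < ST_INSERTION_SORT_THRESHOLD)
	{
		insertion_sort_int32(a, n);
		return;
	}

	/* Lomuto partition around the last element (illustration only) */
	int32_t		pivot = a[n - 1];
	size_t		store = 0;

	for (size_t i = 0; i + 1 < n; i++)
		if (a[i] < pivot)
		{
			int32_t		tmp = a[store];

			a[store] = a[i];
			a[i] = tmp;
			store++;
		}
	a[n - 1] = a[store];
	a[store] = pivot;

	sort_int32(a, store);
	sort_int32(a + store + 1, n - store - 1);
}
```

The only thing that changed between the test runs is that one constant. 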
I'm still thinking 20 or so is about right.\n\nI've put a lot out here recently, so I'll take a break now and come\nback in a few weeks.\n\n(no running tally here because the conclusions haven't changed since\nlast message)\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 2 Feb 2022 13:40:09 -0500", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "In a couple days I'm going to commit the v3 patch \"accelerate tuple\nsorting for common types\" as-is after giving it one more look, barring\nobjections.\n\nI started towards incorporating the change in insertion sort threshold\n(part of 0010), but that caused regression test failures, so that will\nhave to wait for a bit of analysis and retesting. (My earlier tests\nwere done in a separate module.)\n\nThe rest in this series that I looked at closely were either\nrefactoring or could use some minor tweaks so likely v16 material.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 31 Mar 2022 17:09:15 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Thu, Mar 31, 2022 at 11:09 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> In a couple days I'm going to commit the v3 patch \"accelerate tuple\n> sorting for common types\" as-is after giving it one more look, barring\n> objections.\n\nHi John,\n\nThanks so much for all the work you've done here! I feel bad that I\nlobbed so many experimental patches in here and then ran away due to\nlack of cycles. That particular patch (the one cfbot has been chewing\non all this time) does indeed seem committable, despite the\ndeficiencies/opportunities listed in comments. 
It's nice to reduce\ncode duplication, it gives the right answers, and it goes faster.\n\n> I started towards incorporating the change in insertion sort threshold\n> (part of 0010), but that caused regression test failures, so that will\n> have to wait for a bit of analysis and retesting. (My earlier tests\n> were done in a separate module.)\n>\n> The rest in this series that I looked at closely were either\n> refactoring or could use some minor tweaks so likely v16 material.\n\nLooking forward to it.\n\n\n", "msg_date": "Fri, 1 Apr 2022 10:42:59 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Fri, Apr 1, 2022 at 4:43 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Thu, Mar 31, 2022 at 11:09 PM John Naylor\n> <john.naylor@enterprisedb.com> wrote:\n> > In a couple days I'm going to commit the v3 patch \"accelerate tuple\n> > sorting for common types\" as-is after giving it one more look, barring\n> > objections.\n\nPushed.\n\n> Hi John,\n>\n> Thanks so much for all the work you've done here! I feel bad that I\n> lobbed so many experimental patches in here and then ran away due to\n> lack of cycles. That particular patch (the one cfbot has been chewing\n> on all this time) does indeed seem committable, despite the\n> deficiencies/opportunities listed in comments. It's nice to reduce\n> code duplication, it gives the right answers, and it goes faster.\n\nThanks for chiming in! It gives me more confidence that there wasn't\nanything amiss that may have gone unnoticed. And no worries -- my own\nreview efforts here have been sporadic. 
;-)\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 2 Apr 2022 15:38:40 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "I wrote:\n\n> I started towards incorporating the change in insertion sort threshold\n> (part of 0010), but that caused regression test failures, so that will\n> have to wait for a bit of analysis and retesting. (My earlier tests\n> were done in a separate module.)\n\nThe failures seem to be where sort order is partially specified. E.g.\nORDER BY col_a, where there are duplicates there and other columns are\ndifferent. Insertion sort is stable IIRC, so moving the threshold\ncaused different orders in these cases. Some cases can be conveniently\nfixed with additional columns in the ORDER BY clause. I'll go through\nthe failures and see how much can be cleaned up as a preparatory\nrefactoring.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 2 Apr 2022 15:50:17 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Sat, Apr 2, 2022 at 9:38 PM John Naylor <john.naylor@enterprisedb.com> wrote:\n> On Fri, Apr 1, 2022 at 4:43 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Thu, Mar 31, 2022 at 11:09 PM John Naylor\n> > <john.naylor@enterprisedb.com> wrote:\n> > > In a couple days I'm going to commit the v3 patch \"accelerate tuple\n> > > sorting for common types\" as-is after giving it one more look, barring\n> > > objections.\n>\n> Pushed.\n\nIt looks like UBsan sees a problem, per BF animal kestrel:\n\n/mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/utils/sort/tuplesort.c:722:51:\nruntime error: load of value 96, which is not a valid value for type\n'bool'\n\n#5 0x0000000000eb65d4 in qsort_tuple_int32_compare (a=0x4292ce0,\nb=0x4292cf8, state=0x4280130) 
at\n/mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/utils/sort/tuplesort.c:722\n#6 qsort_tuple_int32 (data=<optimized out>, n=133,\narg=arg@entry=0x4280130) at\n/mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/include/lib/sort_template.h:313\n#7 0x0000000000eaf747 in tuplesort_sort_memtuples\n(state=state@entry=0x4280130) at\n/mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/utils/sort/tuplesort.c:3613\n#8 0x0000000000eaedcb in tuplesort_performsort\n(state=state@entry=0x4280130) at\n/mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/utils/sort/tuplesort.c:2154\n#9 0x0000000000573d60 in heapam_relation_copy_for_cluster\n(OldHeap=<optimized out>, NewHeap=<optimized out>, OldIndex=<optimized\nout>, use_sort=<optimized out>, OldestXmin=11681,\nxid_cutoff=<optimized out>, multi_cutoff=0x7ffecb0cfa70,\nnum_tuples=0x7ffecb0cfa38, tups_vacuumed=0x7ffecb0cfa20,\ntups_recently_dead=0x7ffecb0cfa28) at\n/mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/access/heap/heapam_handler.c:955\n\nReproduced locally, using the same few lines from the cluster.sql\ntest. I'll try to dig more tomorrow...\n\n\n", "msg_date": "Sat, 2 Apr 2022 23:26:31 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Sat, Apr 2, 2022 at 5:27 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> It looks like UBsan sees a problem, per BF animal kestrel:\n>\n> /mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/utils/sort/tuplesort.c:722:51:\n> runtime error: load of value 96, which is not a valid value for type\n> 'bool'\n\nYeah, same with tamandua. 
Then, skink (a Valgrind animal) shows:\n\n==1940791== VALGRINDERROR-BEGIN\n==1940791== Conditional jump or move depends on uninitialised value(s)\n==1940791== at 0x73D394: ApplyInt32SortComparator (sortsupport.h:311)\n==1940791== by 0x73D394: qsort_tuple_int32_compare (tuplesort.c:722)\n==1940791== by 0x73D394: qsort_tuple_int32 (sort_template.h:313)\n==1940791== by 0x7409BC: tuplesort_sort_memtuples (tuplesort.c:3613)\n==1940791== by 0x742806: tuplesort_performsort (tuplesort.c:2154)\n==1940791== by 0x23C109: heapam_relation_copy_for_cluster\n(heapam_handler.c:955)\n==1940791== by 0x35799A: table_relation_copy_for_cluster (tableam.h:1658)\n==1940791== by 0x35799A: copy_table_data (cluster.c:913)\n==1940791== by 0x359016: rebuild_relation (cluster.c:606)\n==1940791== by 0x35914E: cluster_rel (cluster.c:427)\n==1940791== by 0x3594EB: cluster (cluster.c:195)\n==1940791== by 0x5C73FF: standard_ProcessUtility (utility.c:862)\n==1940791== by 0x5C78D0: ProcessUtility (utility.c:530)\n==1940791== by 0x5C4C7B: PortalRunUtility (pquery.c:1158)\n==1940791== by 0x5C4F78: PortalRunMulti (pquery.c:1315)\n==1940791== Uninitialised value was created by a stack allocation\n==1940791== at 0x74224E: tuplesort_putheaptuple (tuplesort.c:1800)\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 2 Apr 2022 17:56:10 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Sat, Apr 2, 2022 at 5:27 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Reproduced locally, using the same few lines from the cluster.sql\n> test. I'll try to dig more tomorrow...\n\nThanks! 
Unfortunately I can't reproduce locally with clang 13/gcc 11,\nwith -Og or -O2 with CFLAGS=\"-fsanitize=undefined,alignment\" ...\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 2 Apr 2022 18:41:30 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Sun, Apr 3, 2022 at 12:41 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> On Sat, Apr 2, 2022 at 5:27 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Reproduced locally, using the same few lines from the cluster.sql\n> > test. I'll try to dig more tomorrow...\n>\n> Thanks! Unfortunately I can't reproduce locally with clang 13/gcc 11,\n> with -Og or -O2 with CFLAGS=\"-fsanitize=undefined,alignment\" ...\n\nMaybe you need to add -fno-sanitize-recover=all to make it crash,\notherwise it just prints the warning and keeps going.\n\n\n", "msg_date": "Sun, 3 Apr 2022 08:07:58 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Sat, Apr 02, 2022 at 06:41:30PM +0700, John Naylor wrote:\n> On Sat, Apr 2, 2022 at 5:27 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Reproduced locally, using the same few lines from the cluster.sql\n> > test. I'll try to dig more tomorrow...\n> \n> Thanks! 
Unfortunately I can't reproduce locally with clang 13/gcc 11,\n> with -Og or -O2 with CFLAGS=\"-fsanitize=undefined,alignment\" ...\n\nLike Thomas just said, I had to use:\nCFLAGS=\"-Og -fsanitize=undefined,alignment -fno-sanitize-recover=all\n\nI'm a couple few steps out of my league here, but it may be an issue with:\n\ncommit 4ea51cdfe85ceef8afabceb03c446574daa0ac23\nAuthor: Robert Haas <rhaas@postgresql.org>\nDate: Mon Jan 19 15:20:31 2015 -0500\n\n Use abbreviated keys for faster sorting of text datums.\n\nThis is enough to avoid the crash, which might be a useful hint..\n\n@@ -4126,22 +4126,23 @@ copytup_cluster(Tuplesortstate *state, SortTuple *stup, void *tup)\n /*\n * set up first-column key value, and potentially abbreviate, if it's a\n * simple column\n */\n+ stup->isnull1 = false;\n if (state->indexInfo->ii_IndexAttrNumbers[0] == 0)\n return;\n \n original = heap_getattr(tuple,\n state->indexInfo->ii_IndexAttrNumbers[0],\n state->tupDesc,\n &stup->isnull1);\n\n\n", "msg_date": "Sat, 2 Apr 2022 15:20:27 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "Hi,\n\nOn 2022-04-03 08:07:58 +1200, Thomas Munro wrote:\n> On Sun, Apr 3, 2022 at 12:41 AM John Naylor\n> <john.naylor@enterprisedb.com> wrote:\n> > On Sat, Apr 2, 2022 at 5:27 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > Reproduced locally, using the same few lines from the cluster.sql\n> > > test. I'll try to dig more tomorrow...\n> >\n> > Thanks! 
Unfortunately I can't reproduce locally with clang 13/gcc 11,\n> > with -Og or -O2 with CFLAGS=\"-fsanitize=undefined,alignment\" ...\n> \n> Maybe you need to add -fno-sanitize-recover=all to make it crash,\n> otherwise it just prints the warning and keeps going.\n\nI commented with a few more details on https://postgr.es/m/20220402201557.thanbsxcql5lk6pc%40alap3.anarazel.de\nand a preliminary analysis in\nhttps://www.postgresql.org/message-id/20220402203344.ahup2u5n73cdbbcv%40alap3.anarazel.de\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 2 Apr 2022 13:37:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Sun, Apr 3, 2022 at 8:20 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> @@ -4126,22 +4126,23 @@ copytup_cluster(Tuplesortstate *state, SortTuple *stup, void *tup)\n\n> + stup->isnull1 = false;\n\nLooks like I might have failed to grok the scheme for encoding null\ninto SortTuple objects. It's clearly uninitialised in some paths,\nwith a special 0 value in datum1. Will need to look more closely with\nmore coffee...\n\n\n", "msg_date": "Sun, 3 Apr 2022 08:37:40 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A qsort template" }, { "msg_contents": "Hi,\n\nOn 2022-04-02 15:20:27 -0500, Justin Pryzby wrote:\n> On Sat, Apr 02, 2022 at 06:41:30PM +0700, John Naylor wrote:\n> > On Sat, Apr 2, 2022 at 5:27 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > Reproduced locally, using the same few lines from the cluster.sql\n> > > test. I'll try to dig more tomorrow...\n> > \n> > Thanks! 
Unfortunately I can't reproduce locally with clang 13/gcc 11,\n> > with -Og or -O2 with CFLAGS=\"-fsanitize=undefined,alignment\" ...\n> \n> Like Thomas just said, I had to use:\n> CFLAGS=\"-Og -fsanitize=undefined,alignment -fno-sanitize-recover=all\n> \n> I'm a couple few steps out of my league here, but it may be an issue with:\n> \n> commit 4ea51cdfe85ceef8afabceb03c446574daa0ac23\n> Author: Robert Haas <rhaas@postgresql.org>\n> Date: Mon Jan 19 15:20:31 2015 -0500\n> \n> Use abbreviated keys for faster sorting of text datums.\n> \n> This is enough to avoid the crash, which might be a useful hint..\n>\n> @@ -4126,22 +4126,23 @@ copytup_cluster(Tuplesortstate *state, SortTuple *stup, void *tup)\n> /*\n> * set up first-column key value, and potentially abbreviate, if it's a\n> * simple column\n> */\n> + stup->isnull1 = false;\n> if (state->indexInfo->ii_IndexAttrNumbers[0] == 0)\n> return;\n> \n> original = heap_getattr(tuple,\n> state->indexInfo->ii_IndexAttrNumbers[0],\n> state->tupDesc,\n> &stup->isnull1);\n\nI don't think that can be correct - the column can be NULL afaics. And I don't\nthink in that patch it's needed, because it always goes through ->comparetup()\nwhen state->onlyKey isn't explicitly set. Which tuplesort_begin_cluster() as\nwell as several others don't. And you'd just sort an uninitialized datum\nimmediately after.\n\nIt's certainly not pretty that copytup_cluster() can use SortTuples without\nactually using SortTuples. Afaics it basically only computes isnull1/datum1 if\nstate->indexInfo->ii_IndexAttrNumbers[0] == 0.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 2 Apr 2022 14:03:33 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Sun, Apr 3, 2022 at 9:03 AM Andres Freund <andres@anarazel.de> wrote:\n> It's certainly not pretty that copytup_cluster() can use SortTuples without\n> actually using SortTuples. 
Afaics it basically only computes isnull1/datum1 if\n> state->indexInfo->ii_IndexAttrNumbers[0] == 0.\n\nI think we just need to decide up front if we're in a situation that\ncan't provide datum1/isnull1 (in this case because it's an expression\nindex), and skip the optimised paths. Here's an experimental patch...\nstill looking into whether there are more cases like this...\n\n(There's also room to recognise when you don't even need to look at\nisnull1 for a less branchy optimised sort, but that was already\ndiscussed and put off for later.)", "msg_date": "Sun, 3 Apr 2022 09:45:13 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A qsort template" }, { "msg_contents": "Hi,\n\nOn 2022-04-03 09:45:13 +1200, Thomas Munro wrote:\n> On Sun, Apr 3, 2022 at 9:03 AM Andres Freund <andres@anarazel.de> wrote:\n> > It's certainly not pretty that copytup_cluster() can use SortTuples without\n> > actually using SortTuples. Afaics it basically only computes isnull1/datum1 if\n> > state->indexInfo->ii_IndexAttrNumbers[0] == 0.\n> \n> I think we just need to decide up front if we're in a situation that\n> can't provide datum1/isnull1 (in this case because it's an expression\n> index), and skip the optimised paths. Here's an experimental patch...\n> still looking into whether there are more cases like this...\n\nThat's a lot of redundant checks. 
How about putting all the checks for\noptimized paths into one if (state->sortKeys && !state->disable_datum1)?\n\nI'm a bit worried that none of the !ubsan tests failed on this...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 2 Apr 2022 16:11:51 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Sun, Apr 3, 2022 at 11:11 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-04-03 09:45:13 +1200, Thomas Munro wrote:\n> > I think we just need to decide up front if we're in a situation that\n> > can't provide datum1/isnull1 (in this case because it's an expression\n> > index), and skip the optimised paths. Here's an experimental patch...\n> > still looking into whether there are more cases like this...\n\nI didn't find anything else.\n\nMaybe it'd be better if we explicitly declared whether datum1 is used\nin each tuplesort mode's 'begin' function, right next to the code that\ninstalls the set of routines that are in control of that? Trying that\nin this version. Is it clearer what's going on like this?\n\n> That's a lot of redundant checks. How about putting all the checks for\n> optimized paths into one if (state->sortKeys && !state->disable_datum1)?\n\nOK, sure.\n\n> I'm a bit worried that none of the !ubsan tests failed on this...\n\nIn accordance with whoever-it-was-that-said-that's law about things\nthat aren't tested, this area turned out to be broken already[1]. 
Once\nwe fix that we should have a new test in the three that might also\neventually have failed under this UB, given enough chaos.\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKG%2BbA%2BbmwD36_oDxAoLrCwZjVtST2fqe%3Db4%3DqZcmU7u89A%40mail.gmail.com", "msg_date": "Sun, 3 Apr 2022 17:46:28 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A qsort template" }, { "msg_contents": "Hi,\n\nOn 2022-04-03 17:46:28 +1200, Thomas Munro wrote:\n> On Sun, Apr 3, 2022 at 11:11 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-04-03 09:45:13 +1200, Thomas Munro wrote:\n> > > I think we just need to decide up front if we're in a situation that\n> > > can't provide datum1/isnull1 (in this case because it's an expression\n> > > index), and skip the optimised paths. Here's an experimental patch...\n> > > still looking into whether there are more cases like this...\n> \n> I didn't find anything else.\n> \n> Maybe it'd be better if we explicitly declared whether datum1 is used\n> in each tuplesort mode's 'begin' function, right next to the code that\n> installs the set of routines that are in control of that? Trying that\n> in this version. 
Is it clearer what's going on like this?\n\nSeems an improvement.\n\n\n> > I'm a bit worried that none of the !ubsan tests failed on this...\n> \n> In accordance with whoever-it-was-that-said-that's law about things\n> that aren't tested, this area turned out to be broken already[1].\n\nYea :/.\n\n\nWould be good to get this committed soon, so we can see further ubsan\nviolations introduced in the next few days (and so I can unblock my local dev\ntests :P).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 3 Apr 2022 09:32:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Mon, Apr 4, 2022 at 4:32 AM Andres Freund <andres@anarazel.de> wrote:\n> Would be good to get this committed soon, so we can see further ubsan\n> violations introduced in the next few days (and so I can unblock my local dev\n> tests :P).\n\nPushed (with a minor tweak).\n\n\n", "msg_date": "Mon, 4 Apr 2022 11:01:30 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A qsort template" }, { "msg_contents": "Here is the updated insertion sort threshold patch based on Thomas'\nexperimental v4 0010, with adjusted regression test output. I only\nfound a couple places where it could make sense to add sort keys to\ntest queries, but 1) not enough to make a big difference and 2) the\nadjustments looked out of place, so I decided to just update all the\nregression tests in one go. Since the patch here is a bit more (and\nless) involved than Thomas' 0010, I'm going to refrain from committing\nuntil it gets review. 
If not in the next couple days, I will bring it\nup at the beginning of the v16 cycle.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 6 Apr 2022 17:31:39 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "Hi,\n\nDavid Rowley privately reported a performance regression when sorting\nsingle ints with a lot of duplicates, in a case that previously hit\nqsort_ssup() but now hits qsort_tuple_int32() and then has to call the\ntiebreaker comparator. Note that this comes up only for sorts in a\nquery, not for eg index builds which always have to tiebreak on item\nptr. I don't have data right now but that'd likely be due to:\n\n+ * XXX: For now, there is no specialization for cases where datum1 is\n+ * authoritative and we don't even need to fall back to a callback at all (that\n+ * would be true for types like int4/int8/timestamp/date, but not true for\n+ * abbreviations of text or multi-key sorts. There could be! Is it worth it?\n\nUpthread we were discussing which variations it'd be worth investing\nextra text segment space on to gain speedup and we put those hard\ndecisions off for future work, but on reflection, we probably should\ntackle this particular point to avoid a regression. I think something\nlike the attached achieves that (draft, not tested much yet, could\nperhaps find a tidier way to code the decision tree). 
In short:\nvariants qsort_tuple_{int32,signed,unsigned}() no longer fall back,\nbut new variants qsort_tuple_{int32,signed,unsigned}_tiebreak() do.\n\nWe should perhaps also reconsider the other XXX comment about finding\na way to skip the retest of column 1 in the tiebreak comparator.\nPerhaps you'd just install a different comparetup function, eg\ncomparetup_index_btree_tail (which would share code), so no need to\nmultiply specialisations for that.\n\nPlanning to look at this more closely after I've sorted out some other\nproblems, but thought I'd post this draft/problem report early in case\nJohn or others have thoughts or would like to run some experiments.", "msg_date": "Mon, 11 Apr 2022 09:44:02 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Sun, Apr 10, 2022 at 2:44 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> David Rowley privately reported a performance regression when sorting\n> single ints with a lot of duplicates, in a case that previously hit\n> qsort_ssup() but now hits qsort_tuple_int32() and then has to call the\n> tiebreaker comparator.\n\nThat's not good.\n\nThe B&M quicksort implementation that we adopted is generally\nextremely fast for that case, since it uses 3 way partitioning (based\non the Dutch National Flag algorithm). 
This essentially makes sorting\nlarge groups of duplicates take only linear time (not linearithmic\ntime).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 10 Apr 2022 14:54:44 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Mon, 11 Apr 2022 at 09:44, Thomas Munro <thomas.munro@gmail.com> wrote:\n> David Rowley privately reported a performance regression when sorting\n> single ints with a lot of duplicates, in a case that previously hit\n> qsort_ssup() but now hits qsort_tuple_int32() and then has to call the\n> tiebreaker comparator. Note that this comes up only for sorts in a\n> query, not for eg index builds which always have to tiebreak on item\n> ptr. I don't have data right now but that'd likely be due to:\n\nYeah, I noticed this when running some sort benchmarks to compare v14\nwith master (as of Thursday last week).\n\nThe biggest slowdown I saw was the test that sorted 1 million tuples\non a BIGINT column with 100 distinct values. The test in question\ndoes sorts on the same column each time, but continually adds columns,\nwhich I was doing to check how wider tuples changed the performance\n(this was for the exercise of 40af10b57 rather than this work).\n\nWith this particular test, v15 is about 15% *slower* than v14. I\ndidn't know what to blame at first, so I tried commenting out the sort\nspecialisations and got the results in the red bars in the graph. This\nmade it about 7.5% *faster* than v14. So looks like this patch is to\nblame. I then hacked the comparator function that's used in the\nspecialisations for BIGINT to comment out the tiebreak to remove the\nindirect function call, which happens to do nothing in this 1 column\nsort case. The aim here was to get an idea what the performance would\nbe if there was a specialisation for single column sorts. 
That's the\nyellow bars, which show about 10% *faster* than master.", "msg_date": "Mon, 11 Apr 2022 10:34:27 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Mon, 11 Apr 2022 at 09:44, Thomas Munro <thomas.munro@gmail.com> wrote:\n> David Rowley privately reported a performance regression when sorting\n> single ints with a lot of duplicates, in a case that previously hit\n> qsort_ssup() but now hits qsort_tuple_int32() and then has to call the\n> tiebreaker comparator. Note that this comes up only for sorts in a\n> query, not for eg index builds which always have to tiebreak on item\n> ptr. I don't have data right now but that'd likely be due to:\n\nI've now added this as an open item for v15.\n\nDavid\n\n\n", "msg_date": "Mon, 11 Apr 2022 12:25:33 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Mon, Apr 11, 2022 at 5:34 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> With this particular test, v15 is about 15% *slower* than v14. I\n> didn't know what to blame at first, so I tried commenting out the sort\n> specialisations and got the results in the red bars in the graph. This\n> made it about 7.5% *faster* than v14. So looks like this patch is to\n> blame. I then hacked the comparator function that's used in the\n> specialisations for BIGINT to comment out the tiebreak to remove the\n> indirect function call, which happens to do nothing in this 1 column\n> sort case. The aim here was to get an idea what the performance would\n> be if there was a specialisation for single column sorts. That's the\n> yellow bars, which show about 10% *faster* than master.\n\nThanks for investigating! 
(I assume you meant 10% faster than v14?)\n\nOn Mon, Apr 11, 2022 at 4:55 AM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> The B&M quicksort implementation that we adopted is generally\n> extremely fast for that case, since it uses 3 way partitioning (based\n> on the Dutch National Flag algorithm). This essentially makes sorting\n> large groups of duplicates take only linear time (not linearithmic\n> time).\n\nIn the below thread, I wondered if it still counts as extremely fast\nnowadays. I hope to give an answer to that during next cycle. Relevant\nto the open item, the paper linked there has a variety of\nlow-cardinality cases. I'll incorporate them in a round of tests soon.\n\nhttps://www.postgresql.org/message-id/CAFBsxsHanJTsX9DNJppXJxwg3bU+YQ6pnmSfPM0uvYUaFdwZdQ@mail.gmail.com\n\nOn Mon, Apr 11, 2022 at 4:44 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> Upthread we were discussing which variations it'd be worth investing\n> extra text segment space on to gain speedup and we put those hard\n> decisions off for future work, but on reflection, we probably should\n> tackle this particular point to avoid a regression. I think something\n> like the attached achieves that (draft, not tested much yet, could\n> perhaps find a tidier way to code the decision tree). 
In short:\n> variants qsort_tuple_{int32,signed,unsigned}() no longer fall back,\n> but new variants qsort_tuple_{int32,signed,unsigned}_tiebreak() do.\n\nLooks good at a glance, I will get some numbers after modifying my test scripts.\n\n> We should perhaps also reconsider the other XXX comment about finding\n> a way to skip the retest of column 1 in the tiebreak comparator.\n> Perhaps you'd just install a different comparetup function, eg\n> comparetup_index_btree_tail (which would share code), so no need to\n> multiply specialisations for that.\n\nIf we need to add these cases to avoid regression, it makes sense to\nmake them work as well as we reasonably can.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 11 Apr 2022 17:11:47 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Mon, 11 Apr 2022 at 22:11, John Naylor <john.naylor@enterprisedb.com> wrote:\n>\n> On Mon, Apr 11, 2022 at 5:34 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> > With this particular test, v15 is about 15% *slower* than v14. I\n> > didn't know what to blame at first, so I tried commenting out the sort\n> > specialisations and got the results in the red bars in the graph. This\n> > made it about 7.5% *faster* than v14. So looks like this patch is to\n> > blame. I then hacked the comparator function that's used in the\n> > specialisations for BIGINT to comment out the tiebreak to remove the\n> > indirect function call, which happens to do nothing in this 1 column\n> > sort case. The aim here was to get an idea what the performance would\n> > be if there was a specialisation for single column sorts. That's the\n> > yellow bars, which show about 10% *faster* than master.\n>\n> Thanks for investigating! (I assume you meant 10% faster than v14?)\n\nYes, I did mean to say v14. 
(I'm too used to comparing everything to master)\n\nDavid\n\n\n", "msg_date": "Tue, 12 Apr 2022 12:40:45 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Mon, 11 Apr 2022 at 09:44, Thomas Munro <thomas.munro@gmail.com> wrote:\n> Planning to look at this more closely after I've sorted out some other\n> problems, but thought I'd post this draft/problem report early in case\n> John or others have thoughts or would like to run some experiments.\n\nThanks for putting the patch together.\n\nI had a look at the patch and I wondered if we really need to add an\nentire dimension of sort functions for just this case. My thought\nprocess here is that when I look at a function such as\nApplySignedSortComparator(), I think that it might be better to save\nadding another dimension for a sort case such as a column that does\nnot contain any NULLs. There's quite a bit more branching saved from\ngetting rid of NULL tests there than what we could save by checking if\nwe need to call the tiebreaker function in a function like\nqsort_tuple_signed_compare().\n\nI didn't really know what the performance implications would be of\nchecking an extra flag would be, so I very quickly put a patch\ntogether and ran the benchmarks.\n\nThe 4GB work_mem 1 million tuple test with values MOD 100 comes out as:\n\nThomas' patch: 10.13% faster than v14\nMy patch: 9.48% faster than v14\nmaster: 15.62% *slower* than v14\n\nSo it does seem like we can fix the regression in a more simple way.\nWe could then maybe do some more meaningful performance tests during\nthe v16 cycle to explore the most useful dimension to add that gains\nthe most performance. Perhaps that's NULLs, or maybe it's something\nelse.\n\nI've attached the patch I tested. It was thrown together very quickly\njust to try out the performance. If it's interesting I can polish it\nup a bit. 
If not, I didn't waste too much time.\n\nDavid", "msg_date": "Tue, 12 Apr 2022 12:58:09 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "As promised, I've done another round of tests (script and spreadsheet\nattached) with\n\n- v15 with 6974924347 and cc58eecc5d reverted\n- v15 with Thomas' patch\n- v15 with David's patch\n- v15 as is (\"std\")\n\n...where v15 is at 7b735f8b52ad. This time I limited it to int,\nbigint, and text types.\n\nSince more cases now use random distributions, I also took some\nmeasures to tighten up the measurements:\n\n- Reuse the same random distribution for all tests where the input is\nrandomized, by invoking the script with/without a second parameter\n- For the text case, use lpadded ints so that lexicographic order is\nthe same as numeric order.\n\nI verified David's mod100 test case and added most test cases from the\nOrson Peters paper I mentioned above. I won't explain all of them\nhere, but the low cardinality ones are randomized sets of:\n\n- mod8\n- dupsq: x mod sqrt(n) , for 10 million about 3 thousand distinct values\n- dup8: (x**8 + n/2) mod n , for 10 million about 80 thousand distinct\nvalues, about 80% with 64 duplicates and 20% with 256 duplicates\n\nAll the clear regressions I can see in v15 are in the above for one or\nmore query types / data types, and both Thomas and David's patches\nrestore performance for those.\n\nMore broadly than the regression, Thomas' is very often the fastest of\nall, at the cost of more binary size. 
David's is occasionally slower\nthan v15 or v15 with revert, but much of that is a slight difference\nand some is probably noise.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 13 Apr 2022 18:19:19 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Wed, 13 Apr 2022 at 23:19, John Naylor <john.naylor@enterprisedb.com> wrote:\n> More broadly than the regression, Thomas' is very often the fastest of\n> all, at the cost of more binary size. David's is occasionally slower\n> than v15 or v15 with revert, but much of that is a slight difference\n> and some is probably noise.\n\nJust to get an opinion from some other hardware, I've run your test\nscript on my AMD 3990x machine.\n\nMy opinion here is that the best thing we can learn from both of our\nresults is, do the patches fix the regression?\n\nI don't believe it should be about if adding the additional\nspecializations performs better than skipping the tie break function\ncall. I think it's pretty obvious that the specializations will be\nfaster. I think if it was decided that v16 would be the version where\nmore work should be done to decide on what should be specialized and\nwhat shouldn't be, then we shouldn't let this regression force our\nhand to make that choice now. It'll be pretty hard to remove any\nspecializations once they've been in a released version of Postgres.\n\nDavid", "msg_date": "Thu, 14 Apr 2022 18:46:00 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Thu, Apr 14, 2022 at 1:46 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Wed, 13 Apr 2022 at 23:19, John Naylor <john.naylor@enterprisedb.com> wrote:\n> > More broadly than the regression, Thomas' is very often the fastest of\n> > all, at the cost of more binary size. 
David's is occasionally slower\n> > than v15 or v15 with revert, but much of that is a slight difference\n> > and some is probably noise.\n\nTo add to my summary of results - the v15 code, with and without extra\npatches, seems slightly worse on B-tree index creation for very low\ncardinality keys, but that's not an index that's going to be useful\n(and therefore common) so that's a good tradeoff in my view. The\nregression David found is more concerning.\n\n> Just to get an opinion from some other hardware, I've run your test\n> script on my AMD 3990x machine.\n\nThanks for that. I only see 4 non-Btree measurements in your results\nthat are larger than v15-revert, versus 8 in mine (Comet Lake). And\noverall, most of those seem within the noise level.\n\n> My opinion here is that the best thing we can learn from both of our\n> results is, do the patches fix the regression?\n\nI'd say the answer is yes for both.\n\n> I don't believe it should be about if adding the additional\n> specializations performs better than skipping the tie break function\n> call. I think it's pretty obvious that the specializations will be\n> faster. I think if it was decided that v16 would be the version where\n> more work should be done to decide on what should be specialized and\n> what shouldn't be, then we shouldn't let this regression force our\n> hand to make that choice now. It'll be pretty hard to remove any\n> specializations once they've been in a released version of Postgres.\n\nI agree that a narrow fix is preferable. I'll take a closer look at\nyour patch soon.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Apr 2022 15:58:08 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Tue, Apr 12, 2022 at 7:58 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> I've attached the patch I tested. 
It was thrown together very quickly\n> just to try out the performance. If it's interesting I can polish it\n> up a bit. If not, I didn't waste too much time.\n\n@@ -959,6 +965,10 @@ tuplesort_begin_batch(Tuplesortstate *state)\n\n state->tapeset = NULL;\n\n+ /* check if specialized sorts can skip calling the tiebreak function */\n+ state->oneKeySort = state->nKeys == 1 &&\n+ !state->sortKeys[0].abbrev_converter;\n+\n\nIIUC, this function is called by tuplesort_begin_common, which in turn\nis called by tuplesort_begin_{heap, indexes, etc}. The latter callers\nset the onlyKey and now oneKeySort variables as appropriate, and\nsometimes hard-coded to false. Is it intentional to set them here\nfirst?\n\nFalling under the polish that you were likely thinking of above:\n\nWe might rename oneKeySort to skipTiebreaker to avoid confusion.\nSince the test for these variables is the same, we could consolidate\nthem into a block and reword this existing comment (which I find a\nlittle confusing anyway):\n\n/*\n* The \"onlyKey\" optimization cannot be used with abbreviated keys, since\n* tie-breaker comparisons may be required. Typically, the optimization\n* is only of value to pass-by-value types anyway, whereas abbreviated\n* keys are typically only of value to pass-by-reference types.\n*/\n\nI can take a stab at this, unless you had something else in mind.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 18 Apr 2022 21:11:22 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "Thanks for looking at this.\n\nOn Tue, 19 Apr 2022 at 02:11, John Naylor <john.naylor@enterprisedb.com> wrote:\n> IIUC, this function is called by tuplesort_begin_common, which in turn\n> is called by tuplesort_begin_{heap, indexes, etc}. The latter callers\n> set the onlyKey and now oneKeySort variables as appropriate, and\n> sometimes hard-coded to false. 
Is it intentional to set them here\n> first?\n>\n> Falling under the polish that you were likely thinking of above:\n\nI did put the patch together quickly just for the benchmark and at the\ntime I was subtly aware that the onlyKey field was being set using a\nsimilar condition as I was using to set the boolean field I'd added.\nOn reflection today, it should be fine just to check if that field is\nNULL or not in the 3 new comparison functions. Similarly to before,\nthis only needs to be done if the datums compare equally, so does not\nadd any code to the path where the datums are non-equal. It looks\nlike the other tuplesort_begin_* functions use a different comparison\nfunction that will never make use of the specialization comparison\nfunctions added by 697492434.\n\nI separated out the \"or\" condition that I'd added to the existing \"if\"\nto make it easier to write a comment explaining why we can skip the\ntiebreak function call.\n\nUpdated patch attached.\n\nDavid", "msg_date": "Tue, 19 Apr 2022 17:29:56 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Tue, Apr 19, 2022 at 12:30 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> Thanks for looking at this.\n>\n> On Tue, 19 Apr 2022 at 02:11, John Naylor <john.naylor@enterprisedb.com> wrote:\n> > IIUC, this function is called by tuplesort_begin_common, which in turn\n> > is called by tuplesort_begin_{heap, indexes, etc}. The latter callers\n> > set the onlyKey and now oneKeySort variables as appropriate, and\n> > sometimes hard-coded to false. 
Is it intentional to set them here\n> > first?\n> >\n> > Falling under the polish that you were likely thinking of above:\n>\n> I did put the patch together quickly just for the benchmark and at the\n> time I was subtly aware that the onlyKey field was being set using a\n> similar condition as I was using to set the boolean field I'd added.\n> On reflection today, it should be fine just to check if that field is\n> NULL or not in the 3 new comparison functions. Similarly to before,\n> this only needs to be done if the datums compare equally, so does not\n> add any code to the path where the datums are non-equal. It looks\n> like the other tuplesort_begin_* functions use a different comparison\n> function that will never make use of the specialization comparison\n> functions added by 697492434.\n\nOkay, this makes logical sense and is a smaller patch to boot. I've\nre-run my tests (attached) to make sure we have our bases covered. I'm\nsharing the min-of-five, as before, but locally I tried . The\nregression is fixed, and most other differences from v15 seem to be\nnoise. It's possible the naturally fastest cases (pre-sorted ints and\nbigints) are slower than v15-revert than expected from noise, but it's\nnot clear.\n\nI think this is good to go.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 19 Apr 2022 20:55:34 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "> Okay, this makes logical sense and is a smaller patch to boot. I've\n> re-run my tests (attached) to make sure we have our bases covered. I'm\n> sharing the min-of-five, as before, but locally I tried . 
The\n\nMy sentence there was supposed to read \"I tried using median and it\nwas a bit less noisy\".\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 19 Apr 2022 20:56:40 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "I intend to commit David's v2 fix next week, unless there are\nobjections, or unless he beats me to it.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 21 Apr 2022 14:09:20 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Thu, 21 Apr 2022 at 19:09, John Naylor <john.naylor@enterprisedb.com> wrote:\n> I intend to commit David's v2 fix next week, unless there are\n> objections, or unless he beats me to it.\n\nI wasn't sure if you wanted to handle it or not, but I don't mind\ndoing it, so I just pushed it after a small adjustment to a comment.\n\nBefore going ahead with it I did test a 2-key sort where the leading\nkey values were all the same. 
I wondered if we'd still see any\nregression from having to re-compare the leading key all over again.\n\nI just did:\n\ncreate table ab (a bigint, b bigint);\ninsert into ab select 0,x from generate_series(1,1000000)x;\nvacuum freeze ab;\n\nI then ran:\nselect * from ab order by a,b offset 1000000;\n\n697492434 (Specialize tuplesort routines for different kinds of\nabbreviated keys)\n$ pgbench -n -f bench1.sql -T 60 -M prepared postgres\ntps = 10.651740 (without initial connection time)\ntps = 10.813647 (without initial connection time)\ntps = 10.648960 (without initial connection time)\n\n697492434~1 (Remove obsolete comment)\n$ pgbench -n -f bench1.sql -T 60 -M prepared postgres\ntps = 9.957163 (without initial connection time)\ntps = 10.191168 (without initial connection time)\ntps = 10.145281 (without initial connection time)\n\nSo it seems there was no regression for that case, at least, not on\nthe AMD machine that I tested on.\n\nDavid\n\n\n", "msg_date": "Fri, 22 Apr 2022 16:13:08 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Fri, Apr 22, 2022 at 11:13 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 21 Apr 2022 at 19:09, John Naylor <john.naylor@enterprisedb.com> wrote:\n> > I intend to commit David's v2 fix next week, unless there are\n> > objections, or unless he beats me to it.\n>\n> I wasn't sure if you wanted to handle it or not, but I don't mind\n> doing it, so I just pushed it after a small adjustment to a comment.\n\nThank you!\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Apr 2022 11:37:29 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Fri, Apr 22, 2022 at 4:37 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> On Fri, Apr 22, 2022 at 11:13 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > On 
Thu, 21 Apr 2022 at 19:09, John Naylor <john.naylor@enterprisedb.com> wrote:\n> > > I intend to commit David's v2 fix next week, unless there are\n> > > objections, or unless he beats me to it.\n> >\n> > I wasn't sure if you wanted to handle it or not, but I don't mind\n> > doing it, so I just pushed it after a small adjustment to a comment.\n>\n> Thank you!\n\nThanks both for working on this. Seems like a good call to defer the\nchoice of further specialisations.\n\n\n", "msg_date": "Fri, 22 Apr 2022 17:10:49 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Fri, Apr 22, 2022 at 11:37:29AM +0700, John Naylor wrote:\n> On Fri, Apr 22, 2022 at 11:13 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > On Thu, 21 Apr 2022 at 19:09, John Naylor <john.naylor@enterprisedb.com> wrote:\n> > > I intend to commit David's v2 fix next week, unless there are\n> > > objections, or unless he beats me to it.\n> >\n> > I wasn't sure if you wanted to handle it or not, but I don't mind\n> > doing it, so I just pushed it after a small adjustment to a comment.\n> \n> Thank you!\n\nShould these debug lines be removed ?\n\nelog(DEBUG1, \"qsort_tuple\");\n\nPerhaps if I ask for debug output, I shouldn't be surprised if it changes\nbetween major releases - but I still found this surprising.\n\nI'm sure it's useful during development and maybe during beta. It could even\nmake sense if it were shown during regression tests (preferably at DEBUG2).\nBut right now it's not. 
is that\n\nts=# \\dt\nDEBUG: qsort_tuple\nList of relations\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 19 May 2022 15:12:54 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Thu, May 19, 2022 at 1:12 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Should these debug lines be removed ?\n>\n> elog(DEBUG1, \"qsort_tuple\");\n\nI agree -- DEBUG1 seems too chatty for something like this. DEBUG2\nwould be more appropriate IMV. Though I don't feel very strongly about\nit.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 19 May 2022 15:24:05 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Thu, May 19, 2022 at 1:12 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> Should these debug lines be removed ?\n>> \n>> elog(DEBUG1, \"qsort_tuple\");\n\n> I agree -- DEBUG1 seems too chatty for something like this. DEBUG2\n> would be more appropriate IMV. Though I don't feel very strongly about\n> it.\n\nGiven the lack of context identification, I'd put the usefulness of\nthese in production at close to zero. +1 for removing them\naltogether, or failing that, downgrade to DEBUG5 or so.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 19 May 2022 18:43:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "On Fri, May 20, 2022 at 5:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > On Thu, May 19, 2022 at 1:12 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >> Should these debug lines be removed ?\n> >>\n> >> elog(DEBUG1, \"qsort_tuple\");\n>\n> > I agree -- DEBUG1 seems too chatty for something like this. DEBUG2\n> > would be more appropriate IMV. 
Though I don't feel very strongly about\n> > it.\n>\n> Given the lack of context identification, I'd put the usefulness of\n> these in production at close to zero. +1 for removing them\n> altogether, or failing that, downgrade to DEBUG5 or so.\n\nI agree this is only useful in development. Removal sounds fine to me,\nso I'll do that soon.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 20 May 2022 13:40:25 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" }, { "msg_contents": "I wrote:\n> I agree this is only useful in development. Removal sounds fine to me,\n> so I'll do that soon.\n\nThis is done.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 23 May 2022 13:17:05 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: A qsort template" } ]
[ { "msg_contents": "Based on the discussion at:\n\nhttps://www.postgresql.org/message-id/6929d485-2d2a-da46-3681-4a400a3d794f%40enterprisedb.com\n\nI'm posting the patch for $subject here in this new thread and I'll\nadd it to the next CF per Tomas' advice.\n\nWith 927f453a94106 committed earlier today, we limit insert batching\nonly to the cases where the query's main command is also insert,\nbecause allowing it to be used in other cases can hit some limitations\nof the current code.\n\nOne such case is cross-partition updates of a partitioned table which\ninternally uses insert. postgres_fdw supports some cases where a row\nis moved from a local partition to a foreign partition. When doing\nso, the moved row is inserted into the latter, but those inserts can't\nuse batching due to the aforementioned commit.\n\nAs described in the thread linked above, to make batching possible for\nthose internal inserts, we'd need to make some changes to both the\ncore code and postgres_fdw, which the attached patch implements.\nDetails are in the commit message.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 18 Feb 2021 18:52:36 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Allow batched insert during cross-partition updates" }, { "msg_contents": "On Thu, Feb 18, 2021 at 6:52 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Based on the discussion at:\n>\n> https://www.postgresql.org/message-id/6929d485-2d2a-da46-3681-4a400a3d794f%40enterprisedb.com\n>\n> I'm posting the patch for $subject here in this new thread and I'll\n> add it to the next CF per Tomas' advice.\n\nDone: https://commitfest.postgresql.org/32/2992/\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Feb 2021 18:54:44 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": 
"Hi,\r\n\r\nthanks for the patch. I had a first look and played around with the code.\r\n\r\nThe code seems clean, complete, and does what it says on the tin. I will\r\nneed a bit more time to acclimatise with all the use cases for a more\r\nthorough review.\r\n\r\nA small question though is why expose PartitionTupleRouting and not add\r\na couple of functions to get the necessary info? If I have read the code\r\ncorrectly the only members actually needed to be exposed are num_partitions\r\nand partitions. Not a critique, I am just curious.\r\n\r\nCheers,\r\n//Georgios", "msg_date": "Tue, 09 Mar 2021 15:53:09 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@protonmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": 
Find the message-id here:\n\n\n\n>\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n\n\n\n\n", "msg_date": "Wed, 10 Mar 2021 12:23:30 +0000", "msg_from": "Georgios <gkokolatos@protonmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Wednesday, March 10, 2021 1:23 PM, Georgios <gkokolatos@protonmail.com> wrote:\n\n>\n>\n> Hi,\n>\n> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n> On Thursday, February 18, 2021 10:54 AM, Amit Langote amitlangote09@gmail.com wrote:\n>\n> > On Thu, Feb 18, 2021 at 6:52 PM Amit Langote amitlangote09@gmail.com wrote:\n> >\n> > > Based on the discussion at:\n> > > https://www.postgresql.org/message-id/6929d485-2d2a-da46-3681-4a400a3d794f%40enterprisedb.com\n> > > I'm posting the patch for $subject here in this new thread and I'll\n> > > add it to the next CF per Tomas' advice.\n> >\n> > Done:https://commitfest.postgresql.org/32/2992/\n>\n> apparently I did not receive the review comment I sent via the commitfest app.\n> Apologies for the chatter. Find the message-id here:\nhttps://www.postgresql.org/message-id/161530518971.29967.9368488207318158252.pgcf%40coridan.postgresql.org\n\nI continued looking a bit at the patch, yet I am either failing to see fix or I am\nlooking at the wrong thing. 
Please find attached a small repro of what my expectetions\nwere.\n\nAs you can see in the repro, I would expect the\n UPDATE local_root_remote_partitions SET a = 2;\nto move the tuples to remote_partition_2 on the same transaction.\nHowever this is not the case, with or without the patch.\n\nIs my expectation of this patch wrong?\n\nCheers,\n//Georgios\n\n>\n> > Amit Langote\n> > EDB: http://www.enterprisedb.com", "msg_date": "Wed, 10 Mar 2021 12:30:08 +0000", "msg_from": "Georgios <gkokolatos@protonmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "Hi Georgios,\n\nOn Wed, Mar 10, 2021 at 12:54 AM Georgios Kokolatos\n<gkokolatos@protonmail.com> wrote:\n>\n> Hi,\n>\n> thanks for the patch. I had a first look and played around with the code.\n>\n> The code seems clean, complete, and does what it says on the tin. I will\n> need a bit more time to acclimatise with all the use cases for a more\n> thorough review.\n\nThanks for checking.\n\n> I small question though is why expose PartitionTupleRouting and not add\n> a couple of functions to get the necessary info? If I have read the code\n> correctly the only members actually needed to be exposed are num_partitions\n> and partitions. Not a critique, I am just curious.\n\nI had implemented accessor functions in an earlier unposted version of\nthe patch, but just exposing PartitionTupleRouting does not sound so\nharmful, so I switched to that approach. 
Maybe if others agree with\nyou that accessor functions would be better, I will change the patch\nthat way.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Mar 2021 17:09:06 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "Hi Georgios,\n\nOn Wed, Mar 10, 2021 at 9:30 PM Georgios <gkokolatos@protonmail.com> wrote:\n> I continued looking a bit at the patch, yet I am either failing to see fix or I am\n> looking at the wrong thing. Please find attached a small repro of what my expectetions\n> were.\n>\n> As you can see in the repro, I would expect the\n> UPDATE local_root_remote_partitions SET a = 2;\n> to move the tuples to remote_partition_2 on the same transaction.\n> However this is not the case, with or without the patch.\n>\n> Is my expectation of this patch wrong?\n\nI think yes. We currently don't have the feature you are looking for\n-- moving tuples from one remote partition to another remote\npartition. This patch is not for adding that feature.\n\nWhat we do support however is moving rows from a local partition to a\nremote partition and that involves performing an INSERT on the latter.\nThis patch is for teaching those INSERTs to use batched mode if\nallowed, which is currently prohibited. 
So with this patch, if an\nUPDATE moves 10 rows from a local partition to a remote partition,\nthen they will be inserted with a single INSERT command containing all\n10 rows, instead of 10 separate INSERT commands.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Mar 2021 17:42:43 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "\n\n\n\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Thursday, March 11, 2021 9:42 AM, Amit Langote <amitlangote09@gmail.com> wrote:\n\n> Hi Georgios,\n>\n> On Wed, Mar 10, 2021 at 9:30 PM Georgios gkokolatos@protonmail.com wrote:\n>\n> > I continued looking a bit at the patch, yet I am either failing to see fix or I am\n> > looking at the wrong thing. Please find attached a small repro of what my expectetions\n> > were.\n> > As you can see in the repro, I would expect the\n> > UPDATE local_root_remote_partitions SET a = 2;\n> > to move the tuples to remote_partition_2 on the same transaction.\n> > However this is not the case, with or without the patch.\n> > Is my expectation of this patch wrong?\n>\n> I think yes. We currently don't have the feature you are looking for\n> -- moving tuples from one remote partition to another remote\n> partition. This patch is not for adding that feature.\n\nThank you for correcting me.\n\n>\n> What we do support however is moving rows from a local partition to a\n> remote partition and that involves performing an INSERT on the latter.\n> This patch is for teaching those INSERTs to use batched mode if\n> allowed, which is currently prohibited. 
So with this patch, if an\n> UPDATE moves 10 rows from a local partition to a remote partition,\n> then they will be inserted with a single INSERT command containing all\n> 10 rows, instead of 10 separate INSERT commands.\n\nSo, if I understand correctly then in my previously attached repro I\nshould have written instead:\n\n CREATE TABLE local_root_remote_partitions (a int) PARTITION BY LIST ( a );\n CREATE TABLE\n local_root_local_partition_1\n PARTITION OF\n local_root_remote_partitions FOR VALUES IN (1);\n\n CREATE FOREIGN TABLE\n local_root_remote_partition_2\n PARTITION OF\n local_root_remote_partitions FOR VALUES IN (2)\n SERVER\n remote_server\n OPTIONS (\n table_name 'remote_partition_2',\n batch_size '10'\n );\n\n INSERT INTO local_root_remote_partitions VALUES (1), (1);\n -- Everything should be on local_root_local_partition_1 and on the same transaction\n SELECT ctid, xmin, xmax, cmax, tableoid::regclass, a FROM local_root_remote_partitions;\n\n UPDATE local_root_remote_partitions SET a = 2;\n -- Everything should be on remote_partition_2 and on the same transaction\n SELECT ctid, xmin, xmax, cmax, tableoid::regclass, a FROM local_root_remote_partitions;\n\n\nI am guessing that I am still wrong because the UPDATE operation above will\nfail due to the restrictions imposed in postgresBeginForeignInsert regarding\nUPDATES.\n\nWould it be too much to ask for the addition of a test case that will\ndemonstrate the change of behaviour found in patch?\n\nCheers,\n//Georgios\n\n>\n> 
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n\n\n\n\n", "msg_date": "Thu, 11 Mar 2021 11:36:28 +0000", "msg_from": "gkokolatos@pm.me", "msg_from_op": false, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "On Thu, Mar 11, 2021 at 8:36 PM <gkokolatos@pm.me> wrote:\n> On Thursday, March 11, 2021 9:42 AM, Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Wed, Mar 10, 2021 at 9:30 PM Georgios gkokolatos@protonmail.com wrote:\n> >\n> > > I continued looking a bit at the patch, yet I am either failing to see fix or I am\n> > > looking at the wrong thing. Please find attached a small repro of what my expectetions\n> > > were.\n> > > As you can see in the repro, I would expect the\n> > > UPDATE local_root_remote_partitions SET a = 2;\n> > > to move the tuples to remote_partition_2 on the same transaction.\n> > > However this is not the case, with or without the patch.\n> > > Is my expectation of this patch wrong?\n> >\n> > I think yes. We currently don't have the feature you are looking for\n> > -- moving tuples from one remote partition to another remote\n> > partition. 
This patch is not for adding that feature.\n>\n> Thank you for correcting me.\n> >\n> > What we do support however is moving rows from a local partition to a\n> > remote partition and that involves performing an INSERT on the latter.\n> > This patch is for teaching those INSERTs to use batched mode if\n> > allowed, which is currently prohibited. So with this patch, if an\n> > UPDATE moves 10 rows from a local partition to a remote partition,\n> > then they will be inserted with a single INSERT command containing all\n> > 10 rows, instead of 10 separate INSERT commands.\n>\n> So, if I understand correctly then in my previously attached repro I\n> should have written instead:\n>\n> CREATE TABLE local_root_remote_partitions (a int) PARTITION BY LIST ( a );\n> CREATE TABLE\n> local_root_local_partition_1\n> PARTITION OF\n> local_root_remote_partitions FOR VALUES IN (1);\n>\n> CREATE FOREIGN TABLE\n> local_root_remote_partition_2\n> PARTITION OF\n> local_root_remote_partitions FOR VALUES IN (2)\n> SERVER\n> remote_server\n> OPTIONS (\n> table_name 'remote_partition_2',\n> batch_size '10'\n> );\n>\n> INSERT INTO local_root_remote_partitions VALUES (1), (1);\n> -- Everything should be on local_root_local_partition_1 and on the same transaction\n> SELECT ctid, xmin, xmax, cmax, tableoid::regclass, a FROM local_root_remote_partitions;\n>\n> UPDATE local_root_remote_partitions SET a = 2;\n> -- Everything should be on remote_partition_2 and on the same transaction\n> SELECT ctid, xmin, xmax, cmax, tableoid::regclass, a FROM local_root_remote_partitions;\n>\n>\n> I am guessing that I am still wrong because the UPDATE operation above will\n> fail due to the restrictions imposed in postgresBeginForeignInsert regarding\n> UPDATES.\n\nYeah, for the move to work without hitting the restriction you\nmention, you will need to write the UPDATE query such that\nlocal_root_remote_partition_2 is not updated. 
For example, as\nfollows:\n\nUPDATE local_root_remote_partitions SET a = 2 WHERE a <> 2;\n\nWith this query, the remote partition is not one of the result\nrelations to be updated, so is able to escape that restriction.\n\n> Would it be too much to ask for the addition of a test case that will\n> demonstrate the change of behaviour found in patch.\n\nHmm, I don't think there's a way to display whether the INSERT done on\nthe remote partition as a part of an (tuple-moving) UPDATE used\nbatching or not. That's because that INSERT's state is hidden from\nEXPLAIN. Maybe we should change EXPLAIN to make it show such hidden\nINSERT's state (especially its batch size) under the original UPDATE\nnode, but I am not sure.\n\nBy the way, the test case added by commit 927f453a94106 does exercise\nthe code added by this patch, but as I said in the last paragraph, we\ncan't verify that by inspecting EXPLAIN output.\n\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 12 Mar 2021 11:45:50 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Friday, March 12, 2021 3:45 AM, Amit Langote <amitlangote09@gmail.com> wrote:\n\n> On Thu, Mar 11, 2021 at 8:36 PM gkokolatos@pm.me wrote:\n>\n> > On Thursday, March 11, 2021 9:42 AM, Amit Langote amitlangote09@gmail.com wrote:\n> >\n> > > On Wed, Mar 10, 2021 at 9:30 PM Georgios gkokolatos@protonmail.com wrote:\n> > >\n> > > > I continued looking a bit at the patch, yet I am either failing to see fix or I am\n> > > > looking at the wrong thing. 
Please find attached a small repro of what my expectetions\n> > > > were.\n> > > > As you can see in the repro, I would expect the\n> > > > UPDATE local_root_remote_partitions SET a = 2;\n> > > > to move the tuples to remote_partition_2 on the same transaction.\n> > > > However this is not the case, with or without the patch.\n> > > > Is my expectation of this patch wrong?\n> > >\n> > > I think yes. We currently don't have the feature you are looking for\n> > > -- moving tuples from one remote partition to another remote\n> > > partition. This patch is not for adding that feature.\n> >\n> > Thank you for correcting me.\n> >\n> > > What we do support however is moving rows from a local partition to a\n> > > remote partition and that involves performing an INSERT on the latter.\n> > > This patch is for teaching those INSERTs to use batched mode if\n> > > allowed, which is currently prohibited. So with this patch, if an\n> > > UPDATE moves 10 rows from a local partition to a remote partition,\n> > > then they will be inserted with a single INSERT command containing all\n> > > 10 rows, instead of 10 separate INSERT commands.\n> >\n> > So, if I understand correctly then in my previously attached repro I\n> > should have written instead:\n> >\n> > CREATE TABLE local_root_remote_partitions (a int) PARTITION BY LIST ( a );\n> > CREATE TABLE\n> > local_root_local_partition_1\n> > PARTITION OF\n> > local_root_remote_partitions FOR VALUES IN (1);\n> >\n> > CREATE FOREIGN TABLE\n> > local_root_remote_partition_2\n> > PARTITION OF\n> > local_root_remote_partitions FOR VALUES IN (2)\n> > SERVER\n> > remote_server\n> > OPTIONS (\n> > table_name 'remote_partition_2',\n> > batch_size '10'\n> > );\n> >\n> > INSERT INTO local_root_remote_partitions VALUES (1), (1);\n> > -- Everything should be on local_root_local_partition_1 and on the same transaction\n> > SELECT ctid, xmin, xmax, cmax, tableoid::regclass, a FROM local_root_remote_partitions;\n> >\n> > UPDATE 
local_root_remote_partitions SET a = 2;\n> > -- Everything should be on remote_partition_2 and on the same transaction\n> > SELECT ctid, xmin, xmax, cmax, tableoid::regclass, a FROM local_root_remote_partitions;\n> >\n> >\n> > I am guessing that I am still wrong because the UPDATE operation above will\n> > fail due to the restrictions imposed in postgresBeginForeignInsert regarding\n> > UPDATES.\n>\n> Yeah, for the move to work without hitting the restriction you\n> mention, you will need to write the UPDATE query such that\n> local_root_remote_partition_2 is not updated. For example, as\n> follows:\n>\n> UPDATE local_root_remote_partitions SET a = 2 WHERE a <> 2;\n>\n> With this query, the remote partition is not one of the result\n> relations to be updated, so is able to escape that restriction.\n\nExcellent. Thank you for the explanation and patience.\n\n>\n> > Would it be too much to ask for the addition of a test case that will\n> > demonstrate the change of behaviour found in patch.\n>\n> Hmm, I don't think there's a way to display whether the INSERT done on\n> the remote partition as a part of an (tuple-moving) UPDATE used\n> batching or not. That's because that INSERT's state is hidden from\n> EXPLAIN. Maybe we should change EXPLAIN to make it show such hidden\n> INSERT's state (especially its batch size) under the original UPDATE\n> node, but I am not sure.\n\nYeah, there does not seem to be a way for explain to do show that information\nwith the current code.\n\n>\n> By the way, the test case added by commit 927f453a94106 does exercise\n> the code added by this patch, but as I said in the last paragraph, we\n> can't verify that by inspecting EXPLAIN output.\n\nI never doubted that. However, there is a difference. The current patch\nchanges the query to be executed in the remote from:\n\n INSERT INTO <snip> VALUES ($1);\nto:\n INSERT INTO <snip> VALUES ($1), ($2) ... 
($n);\n\nWhen this patch gets in, it would be very helpful to know that subsequent\ncode changes will not cause regressions. So I was wondering if there is\na way to craft a test case that would break for the code in 927f453a94106\nyet succeed with the current patch.\n\nI attach version 2 of my small reproduction. I am under the impression that\nin this version, examining the value of cmin in the remote table should\ngive an indication of whether the remote received a multiple insert queries\nwith a single value, or a single insert query with multiple values.\n\nOr is this a wrong assumption of mine?\n\nCheers,\n//Georgios\n\n>\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Amit Langote\n> EDB: http://www.enterprisedb.com", "msg_date": "Fri, 12 Mar 2021 10:59:19 +0000", "msg_from": "gkokolatos@pm.me", "msg_from_op": false, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "Hi Georgios,\n\nOn Fri, Mar 12, 2021 at 7:59 PM <gkokolatos@pm.me> wrote:\n> On Friday, March 12, 2021 3:45 AM, Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Thu, Mar 11, 2021 at 8:36 PM gkokolatos@pm.me wrote:\n> > > On Thursday, March 11, 2021 9:42 AM, Amit Langote amitlangote09@gmail.com wrote:\n> > > > What we do support however is moving rows from a local partition to a\n> > > > remote partition and that involves performing an INSERT on the latter.\n> > > > This patch is for teaching those INSERTs to use batched mode if\n> > > > 
allowed, which is currently prohibited. So with this patch, if an\n> > > > UPDATE moves 10 rows from a local partition to a remote partition,\n> > > > then they will be inserted with a single INSERT command containing all\n> > > > 10 rows, instead of 10 separate INSERT commands.\n> > >\n> > > So, if I understand correctly then in my previously attached repro I\n> > > should have written instead:\n> > >\n> > > CREATE TABLE local_root_remote_partitions (a int) PARTITION BY LIST ( a );\n> > > CREATE TABLE\n> > > local_root_local_partition_1\n> > > PARTITION OF\n> > > local_root_remote_partitions FOR VALUES IN (1);\n> > >\n> > > CREATE FOREIGN TABLE\n> > > local_root_remote_partition_2\n> > > PARTITION OF\n> > > local_root_remote_partitions FOR VALUES IN (2)\n> > > SERVER\n> > > remote_server\n> > > OPTIONS (\n> > > table_name 'remote_partition_2',\n> > > batch_size '10'\n> > > );\n> > >\n> > > INSERT INTO local_root_remote_partitions VALUES (1), (1);\n> > > -- Everything should be on local_root_local_partition_1 and on the same transaction\n> > > SELECT ctid, xmin, xmax, cmax, tableoid::regclass, a FROM local_root_remote_partitions;\n> > >\n> > > UPDATE local_root_remote_partitions SET a = 2;\n> > > -- Everything should be on remote_partition_2 and on the same transaction\n> > > SELECT ctid, xmin, xmax, cmax, tableoid::regclass, a FROM local_root_remote_partitions;\n> > >\n> > >\n> > > I am guessing that I am still wrong because the UPDATE operation above will\n> > > fail due to the restrictions imposed in postgresBeginForeignInsert regarding\n> > > UPDATES.\n> >\n> > Yeah, for the move to work without hitting the restriction you\n> > mention, you will need to write the UPDATE query such that\n> > local_root_remote_partition_2 is not updated. 
For example, as\n> > follows:\n> >\n> > UPDATE local_root_remote_partitions SET a = 2 WHERE a <> 2;\n> >\n> > With this query, the remote partition is not one of the result\n> > relations to be updated, so is able to escape that restriction.\n>\n> Excellent. Thank you for the explanation and patience.\n>\n> > > Would it be too much to ask for the addition of a test case that will\n> > > demonstrate the change of behaviour found in patch.\n> >\n> > Hmm, I don't think there's a way to display whether the INSERT done on\n> > the remote partition as a part of an (tuple-moving) UPDATE used\n> > batching or not. That's because that INSERT's state is hidden from\n> > EXPLAIN. Maybe we should change EXPLAIN to make it show such hidden\n> > INSERT's state (especially its batch size) under the original UPDATE\n> > node, but I am not sure.\n>\n> Yeah, there does not seem to be a way for explain to do show that information\n> with the current code.\n>\n> > By the way, the test case added by commit 927f453a94106 does exercise\n> > the code added by this patch, but as I said in the last paragraph, we\n> > can't verify that by inspecting EXPLAIN output.\n>\n> I never doubted that. However, there is a difference. The current patch\n> changes the query to be executed in the remote from:\n>\n> INSERT INTO <snip> VALUES ($1);\n> to:\n> INSERT INTO <snip> VALUES ($1), ($2) ... ($n);\n>\n> When this patch gets in, it would be very helpful to know that subsequent\n> code changes will not cause regressions. So I was wondering if there is\n> a way to craft a test case that would break for the code in 927f453a94106\n> yet succeed with the current patch.\n\nThe test case \"works\" both before and after the patch, with the\ndifference being in the form of the remote query. It seems to me\nthough that you do get that.\n\n> I attach version 2 of my small reproduction. 
I am under the impression that\n> in this version, examining the value of cmin in the remote table should\n> give an indication of whether the remote received a multiple insert queries\n> with a single value, or a single insert query with multiple values.\n>\n> Or is this a wrong assumption of mine?\n\nNo, I think you have a good idea here.\n\nI've adjusted that test case to confirm that the batching indeed works\nby checking cmin of the moved rows, as you suggest. Please check the\nattached updated patch.\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 16 Mar 2021 14:13:50 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "\n\n\n\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Tuesday, March 16, 2021 6:13 AM, Amit Langote <amitlangote09@gmail.com> wrote:\n\n> Hi Georgios,\n>\n> On Fri, Mar 12, 2021 at 7:59 PM gkokolatos@pm.me wrote:\n>\n> > On Friday, March 12, 2021 3:45 AM, Amit Langote amitlangote09@gmail.com wrote:\n> >\n> > > On Thu, Mar 11, 2021 at 8:36 PM gkokolatos@pm.me wrote:\n> > >\n> > > > On Thursday, March 11, 2021 9:42 AM, Amit Langote amitlangote09@gmail.com wrote:\n> > > >\n> > > > > What we do support however is moving rows from a local partition to a\n> > > > > remote partition and that involves performing an INSERT on the latter.\n> > > > > This patch is for teaching those INSERTs to use batched mode if\n> > > > > allowed, which is currently prohibited. 
So with this patch, if an\n> > > > > UPDATE moves 10 rows from a local partition to a remote partition,\n> > > > > then they will be inserted with a single INSERT command containing all\n> > > > > 10 rows, instead of 10 separate INSERT commands.\n> > > >\n> > > > So, if I understand correctly then in my previously attached repro I\n> > > > should have written instead:\n> > > >\n> > > > CREATE TABLE local_root_remote_partitions (a int) PARTITION BY LIST ( a );\n> > > > CREATE TABLE\n> > > > local_root_local_partition_1\n> > > > PARTITION OF\n> > > > local_root_remote_partitions FOR VALUES IN (1);\n> > > >\n> > > > CREATE FOREIGN TABLE\n> > > > local_root_remote_partition_2\n> > > > PARTITION OF\n> > > > local_root_remote_partitions FOR VALUES IN (2)\n> > > > SERVER\n> > > > remote_server\n> > > > OPTIONS (\n> > > > table_name 'remote_partition_2',\n> > > > batch_size '10'\n> > > > );\n> > > >\n> > > > INSERT INTO local_root_remote_partitions VALUES (1), (1);\n> > > > -- Everything should be on local_root_local_partition_1 and on the same transaction\n> > > > SELECT ctid, xmin, xmax, cmax, tableoid::regclass, a FROM local_root_remote_partitions;\n> > > >\n> > > > UPDATE local_root_remote_partitions SET a = 2;\n> > > > -- Everything should be on remote_partition_2 and on the same transaction\n> > > > SELECT ctid, xmin, xmax, cmax, tableoid::regclass, a FROM local_root_remote_partitions;\n> > > >\n> > > >\n> > > > I am guessing that I am still wrong because the UPDATE operation above will\n> > > > fail due to the restrictions imposed in postgresBeginForeignInsert regarding\n> > > > UPDATES.\n> > >\n> > > Yeah, for the move to work without hitting the restriction you\n> > > mention, you will need to write the UPDATE query such that\n> > > local_root_remote_partition_2 is not updated. 
For example, as\n> > > follows:\n> > > UPDATE local_root_remote_partitions SET a = 2 WHERE a <> 2;\n> > > With this query, the remote partition is not one of the result\n> > > relations to be updated, so is able to escape that restriction.\n> >\n> > Excellent. Thank you for the explanation and patience.\n> >\n> > > > Would it be too much to ask for the addition of a test case that will\n> > > > demonstrate the change of behaviour found in patch.\n> > >\n> > > Hmm, I don't think there's a way to display whether the INSERT done on\n> > > the remote partition as a part of an (tuple-moving) UPDATE used\n> > > batching or not. That's because that INSERT's state is hidden from\n> > > EXPLAIN. Maybe we should change EXPLAIN to make it show such hidden\n> > > INSERT's state (especially its batch size) under the original UPDATE\n> > > node, but I am not sure.\n> >\n> > Yeah, there does not seem to be a way for explain to do show that information\n> > with the current code.\n> >\n> > > By the way, the test case added by commit 927f453a94106 does exercise\n> > > the code added by this patch, but as I said in the last paragraph, we\n> > > can't verify that by inspecting EXPLAIN output.\n> >\n> > I never doubted that. However, there is a difference. The current patch\n> > changes the query to be executed in the remote from:\n> > INSERT INTO <snip> VALUES ($1);\n> > to:\n> > INSERT INTO <snip> VALUES ($1), ($2) ... ($n);\n> > When this patch gets in, it would be very helpful to know that subsequent\n> > code changes will not cause regressions. So I was wondering if there is\n> > a way to craft a test case that would break for the code in 927f453a94106\n> > yet succeed with the current patch.\n>\n> The test case \"works\" both before and after the patch, with the\n> difference being in the form of the remote query. It seems to me\n> though that you do get that.\n>\n> > I attach version 2 of my small reproduction. 
I am under the impression that\n> > in this version, examining the value of cmin in the remote table should\n> > give an indication of whether the remote received a multiple insert queries\n> > with a single value, or a single insert query with multiple values.\n> > Or is this a wrong assumption of mine?\n>\n> No, I think you have a good idea here.\n\nThank you.\n\n>\n> I've adjusted that test case to confirm that the batching indeed works\n> by checking cmin of the moved rows, as you suggest. Please check the\n> attached updated patch.\n\nExcellent. The patch in the current version with the added test seems\nready to me.\n\nI would still vote to have accessor functions instead of exposing the\nwhole PartitionTupleRouting struct, but I am not going to hold a too\nstrong stance about it.\n\nIf you agree with me, please provide an updated version of the patch.\nOtherwise let it be known and I will flag the patch as RfC in the\ncommitfest app.\n\nCheers,\n//Georgios\n\n>\n> --\n>\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n\n\n\n\n", "msg_date": "Tue, 16 Mar 2021 08:11:57 +0000", "msg_from": "gkokolatos@pm.me", "msg_from_op": false, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "Hi Georgios,\n\nOn Tue, Mar 16, 2021 at 5:12 PM <gkokolatos@pm.me> wrote:\n> On Tuesday, March 16, 2021 6:13 AM, Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Fri, Mar 12, 2021 at 7:59 PM gkokolatos@pm.me wrote:\n> > > On Friday, March 12, 2021 3:45 AM, Amit Langote amitlangote09@gmail.com wrote:\n> > > > By the way, the test case added by commit 927f453a94106 does exercise\n> > > > the code added by this patch, but as I said in the last paragraph, we\n> > > > can't verify that by inspecting EXPLAIN output.\n> > >\n> > > I never 
doubted that. However, there is a difference. The current patch\n> > > changes the query to be executed in the remote from:\n> > > INSERT INTO <snip> VALUES ($1);\n> > > to:\n> > > INSERT INTO <snip> VALUES ($1), ($2) ... ($n);\n> > > When this patch gets in, it would be very helpful to know that subsequent\n> > > code changes will not cause regressions. So I was wondering if there is\n> > > a way to craft a test case that would break for the code in 927f453a94106\n> > > yet succeed with the current patch.\n> >\n> > The test case \"works\" both before and after the patch, with the\n> > difference being in the form of the remote query. It seems to me\n> > though that you do get that.\n> >\n> > > I attach version 2 of my small reproduction. I am under the impression that\n> > > in this version, examining the value of cmin in the remote table should\n> > > give an indication of whether the remote received a multiple insert queries\n> > > with a single value, or a single insert query with multiple values.\n> > > Or is this a wrong assumption of mine?\n> >\n> > No, I think you have a good idea here.\n> >\n> > I've adjusted that test case to confirm that the batching indeed works\n> > by checking cmin of the moved rows, as you suggest. Please check the\n> > attached updated patch.\n>\n> Excellent. 
The patch in the current version with the added test seems\n> ready to me.\n\nThanks for quickly checking that.\n\n> I would still vote to have accessor functions instead of exposing the\n> whole PartitionTupleRouting struct, but I am not going to hold a too\n> strong stance about it.\n\nI as well, although I would wait for others to chime in before\nupdating the patch that way.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 16 Mar 2021 17:59:39 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "\n\n\n\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Tuesday, March 16, 2021 9:59 AM, Amit Langote <amitlangote09@gmail.com> wrote:\n\n> Hi Georgios,\n>\n> On Tue, Mar 16, 2021 at 5:12 PM gkokolatos@pm.me wrote:\n>\n> > On Tuesday, March 16, 2021 6:13 AM, Amit Langote amitlangote09@gmail.com wrote:\n> >\n> > > On Fri, Mar 12, 2021 at 7:59 PM gkokolatos@pm.me wrote:\n> > >\n> > > > On Friday, March 12, 2021 3:45 AM, Amit Langote amitlangote09@gmail.com wrote:\n> > > >\n> > > > > By the way, the test case added by commit 927f453a94106 does exercise\n> > > > > the code added by this patch, but as I said in the last paragraph, we\n> > > > > can't verify that by inspecting EXPLAIN output.\n> > > >\n> > > > I never doubted that. However, there is a difference. The current patch\n> > > > changes the query to be executed in the remote from:\n> > > > INSERT INTO <snip> VALUES ($1);\n> > > > to:\n> > > > INSERT INTO <snip> VALUES ($1), ($2) ... ($n);\n> > > > When this patch gets in, it would be very helpful to know that subsequent\n> > > > code changes will not cause regressions. 
So I was wondering if there is\n> > > > a way to craft a test case that would break for the code in 927f453a94106\n> > > > yet succeed with the current patch.\n> > >\n> > > The test case \"works\" both before and after the patch, with the\n> > > difference being in the form of the remote query. It seems to me\n> > > though that you do get that.\n> > >\n> > > > I attach version 2 of my small reproduction. I am under the impression that\n> > > > in this version, examining the value of cmin in the remote table should\n> > > > give an indication of whether the remote received a multiple insert queries\n> > > > with a single value, or a single insert query with multiple values.\n> > > > Or is this a wrong assumption of mine?\n> > >\n> > > No, I think you have a good idea here.\n> > > I've adjusted that test case to confirm that the batching indeed works\n> > > by checking cmin of the moved rows, as you suggest. Please check the\n> > > attached updated patch.\n> >\n> > Excellent. The patch in the current version with the added test seems\n> > ready to me.\n>\n> Thanks for quickly checking that.\n\nA pleasure.\n\n>\n> > I would still vote to have accessor functions instead of exposing the\n> > whole PartitionTupleRouting struct, but I am not going to hold a too\n> > strong stance about it.\n>\n> I as well, although I would wait for others to chime in before\n> updating the patch that way.\n\nFair enough.\n\nStatus updated to RfC in the commitfest app.\n\n>\n> --\n>\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n\n\n\n\n", "msg_date": "Tue, 16 Mar 2021 09:13:51 +0000", "msg_from": "gkokolatos@pm.me", "msg_from_op": false, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "On Tue, Mar 16, 2021 at 6:13 PM <gkokolatos@pm.me> wrote:\n> Status updated to RfC in the commitfest app.\n\nPatch fails to apply per cfbot, so rebased.\n\n-- 
\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 5 Apr 2021 00:05:43 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "Hi,\nIn the description:\n\ncross-partition update of partitioned tables can't use batching\nbecause ExecInitRoutingInfo() which initializes the insert target\n\n'which' should be dropped since 'because' should start a sentence.\n\n+-- Check that batched inserts also works for inserts made during\n\ninserts also works -> inserts also work\n\n+ Assert(node->rootResultRelInfo->ri_RelationDesc->rd_rel->relkind ==\n+ RELKIND_PARTITIONED_TABLE);\n\nThe level of nested field accesses is quite deep. If the assertion fails,\nit would be hard to know which field is null.\nMaybe use several assertions:\n Assert(node->rootResultRelInfo)\n Assert(node->rootResultRelInfo->ri_RelationDesc)\n Assert(node->rootResultRelInfo->ri_RelationDesc->rd_rel->relkind ==\n...\n\nCheers\n\nOn Sun, Apr 4, 2021 at 8:06 AM Amit Langote <amitlangote09@gmail.com> wrote:\n\n> On Tue, Mar 16, 2021 at 6:13 PM <gkokolatos@pm.me> wrote:\n> > Status updated to RfC in the commitfest app.\n>\n> Patch fails to apply per cfbot, so rebased.\n>\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n>", "msg_date": "Sun, 4 Apr 2021 09:19:27 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "On Mon, Apr 5, 2021 at 1:16 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> Hi,\n> In the description:\n>\n> cross-partition update of partitioned tables can't use batching\n> because ExecInitRoutingInfo() which initializes the insert target\n>\n> 'which' should be dropped since 'because' should start a sentence.\n>\n> +-- Check that batched inserts also works for inserts made during\n>\n> inserts also works -> inserts also work\n>\n> + Assert(node->rootResultRelInfo->ri_RelationDesc->rd_rel->relkind ==\n> + RELKIND_PARTITIONED_TABLE);\n>\n> The level of nested field accesses is quite deep. If the assertion fails, it would be hard to know which field is null.\n> Maybe use several assertions:\n> Assert(node->rootResultRelInfo)\n> Assert(node->rootResultRelInfo->ri_RelationDesc)\n> Assert(node->rootResultRelInfo->ri_RelationDesc->rd_rel->relkind == ...\n\nThanks for taking a look at this.\n\nWhile I agree about having the 1st Assert you suggest, I don't think\nthis code needs the 2nd one, because its failure would very likely be\ndue to a problem in some totally unrelated code.\n\n
I had to adjust the test case a little bit to\naccount for the changes of 86dc90056d, something I failed to notice\nyesterday. Also, I expanded the test case added in postgres_fdw.sql a\nbit to show the batching in action more explicitly.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 6 Apr 2021 18:37:48 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "On Tue, Apr 6, 2021 at 3:08 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Mon, Apr 5, 2021 at 1:16 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> >\n> > Hi,\n> > In the description:\n> >\n> > cross-partition update of partitioned tables can't use batching\n> > because ExecInitRoutingInfo() which initializes the insert target\n> >\n> > 'which' should be dropped since 'because' should start a sentence.\n> >\n> > +-- Check that batched inserts also works for inserts made during\n> >\n> > inserts also works -> inserts also work\n> >\n> > + Assert(node->rootResultRelInfo->ri_RelationDesc->rd_rel->relkind ==\n> > + RELKIND_PARTITIONED_TABLE);\n> >\n> > The level of nested field accesses is quite deep. If the assertion fails, it would be hard to know which field is null.\n> > Maybe use several assertions:\n> > Assert(node->rootResultRelInfo)\n> > Assert(node->rootResultRelInfo->ri_RelationDesc)\n> > Assert(node->rootResultRelInfo->ri_RelationDesc->rd_rel->relkind == ...\n>\n> Thanks for taking a look at this.\n>\n> While I agree about having the 1st Assert you suggest, I don't think\n> this code needs the 2nd one, because its failure would very likely be\n> due to a problem in some totally unrelated code.\n>\n> Updated patch attached. I had to adjust the test case a little bit to\n> account for the changes of 86dc90056d, something I failed to notice\n> yesterday. 
Also, I expanded the test case added in postgres_fdw.sql a\n> bit to show the batching in action more explicitly.\n\nSome minor comments:\n1) don't we need order by clause for the selects in the tests added?\n+SELECT tableoid::regclass, * FROM batch_cp_upd_test;\n+SELECT cmin, * FROM batch_cp_upd_test1;\n\n2) typo - it is \"should\" not \"shoud\"\n+-- cmin shoud be different across rows, because each one would be inserted\n\n3) will the cmin in the output always be the same?\n+SELECT cmin, * FROM batch_cp_upd_test3;\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 6 Apr 2021 15:18:57 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "On Tue, Apr 6, 2021 at 6:49 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Tue, Apr 6, 2021 at 3:08 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Updated patch attached. I had to adjust the test case a little bit to\n> > account for the changes of 86dc90056d, something I failed to notice\n> > yesterday. Also, I expanded the test case added in postgres_fdw.sql a\n> > bit to show the batching in action more explicitly.\n>\n> Some minor comments:\n\nThanks for the review.\n\n> 1) don't we need order by clause for the selects in the tests added?\n> +SELECT tableoid::regclass, * FROM batch_cp_upd_test;\n\nGood point. It wasn't necessary before, but it is after the test\nexpansion, so added.\n\n> 3) will the cmin in the output always be the same?\n> +SELECT cmin, * FROM batch_cp_upd_test3;\n\nTBH, I am not so sure. Maybe it's not a good idea to rely on cmin\nafter all. 
I've rewritten the tests to use a different method of\ndetermining if a single or multiple insert commands were used in\nmoving rows into foreign partitions.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 6 Apr 2021 22:07:41 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "On Tue, Apr 6, 2021 at 6:37 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > 3) will the cmin in the output always be the same?\n> > +SELECT cmin, * FROM batch_cp_upd_test3;\n>\n> TBH, I am not so sure. Maybe it's not a good idea to rely on cmin\n> after all. I've rewritten the tests to use a different method of\n> determining if a single or multiple insert commands were used in\n> moving rows into foreign partitions.\n\nThanks! It looks good!\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 6 Apr 2021 19:22:40 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "On Tue, Apr 6, 2021 at 10:52 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Apr 6, 2021 at 6:37 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > 3) will the cmin in the output always be the same?\n> > > +SELECT cmin, * FROM batch_cp_upd_test3;\n> >\n> > TBH, I am not so sure. Maybe it's not a good idea to rely on cmin\n> > after all. I've rewritten the tests to use a different method of\n> > determining if a single or multiple insert commands were used in\n> > moving rows into foreign partitions.\n>\n> Thanks! It looks good!\n\nThanks for checking. 
I'll mark this as RfC.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 7 Apr 2021 10:49:35 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "> > Thanks! It looks good!\r\n> \r\n> Thanks for checking. I'll mark this as RfC.\r\n\r\nHi,\r\n\r\nThe patch cannot be applied to the latest head branch, it will be nice if you can rebase it.\r\nAnd when looking into the patch, I have some comments on it.\r\n\r\n1)\r\nIIRC, After the commit c5b7ba4, the initialization of mt_partition_tuple_routing was postponed out of ExecInitModifyTable.\r\nSo, the following if-test use \"proute\" which is initialized at the beginning of the ExecModifyTable() could be out of date.\r\nAnd the regression test of postgres_fdw failed with the patch after the commit c5b7ba4.\r\n\r\n+\t * If the query's main target relation is a partitioned table, any inserts\r\n+\t * would have been performed using tuple routing, so use the appropriate\r\n+\t * set of target relations. 
Note that this also covers any inserts\r\n+\t * performed during cross-partition UPDATEs that would have occurred\r\n+\t * through tuple routing.\r\n \t */\r\n \tif (proute)\r\n...\r\n\r\nIt seems we should get the mt_partition_tuple_routing just before the if-test.\r\n\r\n2)\r\n+\t\tforeach(lc, estate->es_opened_result_relations)\r\n+\t\t{\r\n+\t\t\tresultRelInfo = lfirst(lc);\r\n+\t\t\tif (resultRelInfo &&\r\n\r\nI am not sure do we need to check if resultRelInfo == NULL because the the existing code did not check it.\r\nAnd if we need to check it, it might be better use \"if (resultRelInfo == NULL &&...\"\r\n\r\n3)\r\n+\tif (fmstate && fmstate->aux_fmstate != NULL)\r\n+\t\tfmstate = fmstate->aux_fmstate;\r\n\r\nIt might be better to write like \" if (fmstate != NULL && fmstate->aux_fmstate != NULL)\".\r\n\r\nBest regards,\r\nhouzj\r\n\r\n\r\n", "msg_date": "Fri, 7 May 2021 09:39:37 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Allow batched insert during cross-partition updates" }, { "msg_contents": "On Fri, May 7, 2021 at 6:39 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> > > Thanks! It looks good!\n> >\n> > Thanks for checking. 
I'll mark this as RfC.\n>\n> Hi,\n>\n> The patch cannot be applied to the latest head branch, it will be nice if you can rebase it.\n\nThanks, done.\n\n> And when looking into the patch, I have some comments on it.\n>\n> 1)\n> IIRC, After the commit c5b7ba4, the initialization of mt_partition_tuple_routing was postponed out of ExecInitModifyTable.\n> So, the following if-test use \"proute\" which is initialized at the beginning of the ExecModifyTable() could be out of date.\n> And the regression test of postgres_fdw failed with the patch after the commit c5b7ba4.\n>\n> + * If the query's main target relation is a partitioned table, any inserts\n> + * would have been performed using tuple routing, so use the appropriate\n> + * set of target relations. Note that this also covers any inserts\n> + * performed during cross-partition UPDATEs that would have occurred\n> + * through tuple routing.\n> */\n> if (proute)\n> ...\n>\n> It seems we should get the mt_partition_tuple_routing just before the if-test.\n\nThat's a good observation. Fixed.\n\n> 2)\n> + foreach(lc, estate->es_opened_result_relations)\n> + {\n> + resultRelInfo = lfirst(lc);\n> + if (resultRelInfo &&\n>\n> I am not sure do we need to check if resultRelInfo == NULL because the the existing code did not check it.\n> And if we need to check it, it might be better use \"if (resultRelInfo == NULL &&...\"\n\nI don't quite remember why I added that test, because nowhere do we\nadd a NULL value to es_opened_result_relations. Actually, we can even\nAssert(resultRelInfo != NULL) here.\n\n> 3)\n> + if (fmstate && fmstate->aux_fmstate != NULL)\n> + fmstate = fmstate->aux_fmstate;\n>\n> It might be better to write like \" if (fmstate != NULL && fmstate->aux_fmstate != NULL)\".\n\nSure, done. 
Actually, there's a if (fmstate) statement right below\nthe code being added, which I fixed to match the style used by the new\ncode.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 10 May 2021 15:58:56 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> [ v6-0001-Allow-batching-of-inserts-during-cross-partition-.patch ]\n\nPer the cfbot, this isn't applying anymore, so I'm setting it back\nto Waiting on Author.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 01 Jul 2021 12:39:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "On Fri, Jul 2, 2021 at 1:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > [ v6-0001-Allow-batching-of-inserts-during-cross-partition-.patch ]\n>\n> Per the cfbot, this isn't applying anymore, so I'm setting it back\n> to Waiting on Author.\n\nRebased patch attached. Thanks for the reminder.\n\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 2 Jul 2021 11:05:08 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "On Fri, Jul 2, 2021 at 7:35 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Fri, Jul 2, 2021 at 1:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Amit Langote <amitlangote09@gmail.com> writes:\n> > > [ v6-0001-Allow-batching-of-inserts-during-cross-partition-.patch ]\n> >\n> > Per the cfbot, this isn't applying anymore, so I'm setting it back\n> > to Waiting on Author.\n>\n> Rebased patch attached. 
Thanks for the reminder.\n\nOne of the test postgres_fdw has failed, can you please post an\nupdated patch for the fix:\ntest postgres_fdw ... FAILED (test process exited with\nexit code 2) 7264 ms\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 22 Jul 2021 10:48:09 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "On Thu, Jul 22, 2021 at 2:18 PM vignesh C <vignesh21@gmail.com> wrote:\n> On Fri, Jul 2, 2021 at 7:35 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Fri, Jul 2, 2021 at 1:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > Amit Langote <amitlangote09@gmail.com> writes:\n> > > > [ v6-0001-Allow-batching-of-inserts-during-cross-partition-.patch ]\n> > >\n> > > Per the cfbot, this isn't applying anymore, so I'm setting it back\n> > > to Waiting on Author.\n> >\n> > Rebased patch attached. Thanks for the reminder.\n>\n> One of the test postgres_fdw has failed, can you please post an\n> updated patch for the fix:\n> test postgres_fdw ... 
FAILED (test process exited with\nexit code 2) 7264 ms\n\nThanks Vignesh.\n\nI found a problem with the underlying batching code that caused this\nfailure and have just reported it here:\n\nhttps://www.postgresql.org/message-id/CA%2BHiwqEWd5B0-e-RvixGGUrNvGkjH2s4m95%3DJcwUnyV%3Df0rAKQ%40mail.gmail.com\n\nHere's v8, including the patch for the above problem.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 27 Jul 2021 11:32:36 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "On Tue, Jul 27, 2021 at 11:32 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Thu, Jul 22, 2021 at 2:18 PM vignesh C <vignesh21@gmail.com> wrote:\n> > One of the tests, postgres_fdw, has failed, can you please post an\n> > updated patch for the fix:\n> > test postgres_fdw ... FAILED (test process exited with\n> > exit code 2) 7264 ms\n>\n> Thanks Vignesh.\n>\n> I found a problem with the underlying batching code that caused this\n> failure and have just reported it here:\n>\n> https://www.postgresql.org/message-id/CA%2BHiwqEWd5B0-e-RvixGGUrNvGkjH2s4m95%3DJcwUnyV%3Df0rAKQ%40mail.gmail.com\n>\n> Here's v8, including the patch for the above problem.\n\nTomas committed the bug-fix, so attaching a rebased version of the\npatch for $subject.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 24 Aug 2021 12:03:59 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "Hi,\n\nOn 2021-08-24 12:03:59 +0900, Amit Langote wrote:\n> Tomas committed the bug-fix, so attaching a rebased version of the\n> patch for $subject.\n\nThis patch is in the current CF, but doesn't apply: http://cfbot.cputube.org/patch_37_2992.log\nmarked the entry as waiting-on-author.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": 
"Mon, 21 Mar 2022 17:30:28 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "On Tue, Mar 22, 2022 at 9:30 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2021-08-24 12:03:59 +0900, Amit Langote wrote:\n> > Tomas committed the bug-fix, so attaching a rebased version of the\n> > patch for $subject.\n>\n> This patch is in the current CF, but doesn't apply: http://cfbot.cputube.org/patch_37_2992.log\n> marked the entry as waiting-on-author.\n\nThanks for the heads up.\n\nRebased to fix a minor conflict with some recently committed\nnodeModifyTable.c changes.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 22 Mar 2022 10:17:08 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "Amit-san,\n\nOn Tue, Mar 22, 2022 at 10:17 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> Rebased to fix a minor conflict with some recently committed\n> nodeModifyTable.c changes.\n\nApologies for not having reviewed the patch. Here are some review comments:\n\n* The patch conflicts with commit ffbb7e65a, so please update the\npatch. (That commit would cause an API break, so I am planning to\napply a fix to HEAD as well [1].) That commit fixed the handling of\npending inserts, which I think would eliminate the need to do this:\n\n * ExecModifyTable(), when inserting any remaining batched tuples,\n must look at the correct set of ResultRelInfos that would've been\n used by such inserts, because failing to do so would result in those\n tuples not actually getting inserted. To fix, ExecModifyTable() is\n now made to get the ResultRelInfos from the PartitionTupleRouting\n data structure which contains the ResultRelInfo that would be used by\n those internal inserts. 
To allow nodeModifyTable.c to look inside\n PartitionTupleRouting, its definition, which was previously local to\n execPartition.c, is exposed via execPartition.h.\n\n* In postgresGetForeignModifyBatchSize():\n\n /*\n- * Should never get called when the insert is being performed as part of a\n- * row movement operation.\n+ * Use the auxiliary state if any; see postgresBeginForeignInsert() for\n+ * details on what it represents.\n */\n- Assert(fmstate == NULL || fmstate->aux_fmstate == NULL);\n+ if (fmstate != NULL && fmstate->aux_fmstate != NULL)\n+ fmstate = fmstate->aux_fmstate;\n\nI might be missing something, but I think we should leave the Assert\nas-is, because we still disallow moving rows to another foreign-table\npartition that is also an UPDATE subplan result relation, which means\nthat any fmstate should have fmstate->aux_fmstate=NULL.\n\n* Also in that function:\n\n- if (fmstate)\n+ if (fmstate != NULL)\n\nThis is correct, but I would vote for leaving that as-is, to make\nback-patching easy.\n\nThat is all I have for now. I will mark this as Waiting on Author.\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CAPmGK17rmXEY3BL%3DAq71L8UZv5f-mz%3DuxJkz1kMnfSSY%2BpFe-A%40mail.gmail.com\n\n\n", "msg_date": "Tue, 6 Dec 2022 18:48:54 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "Hi Fujita-san,\n\nOn Tue, Dec 6, 2022 at 6:47 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Tue, Mar 22, 2022 at 10:17 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Rebased to fix a minor conflict with some recently committed\n> > nodeModifyTable.c changes.\n>\n> Apologies for not having reviewed the patch. Here are some review comments:\n\nNo problem and thanks for taking a look.\n\n> * The patch conflicts with commit ffbb7e65a, so please update the\n> patch. 
(That commit would cause an API break, so I am planning to\n> apply a fix to HEAD as well [1].) That commit fixed the handling of\n> pending inserts, which I think would eliminate the need to do this:\n>\n> * ExecModifyTable(), when inserting any remaining batched tuples,\n> must look at the correct set of ResultRelInfos that would've been\n> used by such inserts, because failing to do so would result in those\n> tuples not actually getting inserted. To fix, ExecModifyTable() is\n> now made to get the ResultRelInfos from the PartitionTupleRouting\n> data structure which contains the ResultRelInfo that would be used by\n> those internal inserts. To allow nodeModifyTable.c to look inside\n> PartitionTupleRouting, its definition, which was previously local to\n> execPartition.c, is exposed via execPartition.h.\n\nAh, I see. Removed those hunks.\n\n> * In postgresGetForeignModifyBatchSize():\n>\n> /*\n> - * Should never get called when the insert is being performed as part of a\n> - * row movement operation.\n> + * Use the auxiliary state if any; see postgresBeginForeignInsert() for\n> + * details on what it represents.\n> */\n> - Assert(fmstate == NULL || fmstate->aux_fmstate == NULL);\n> + if (fmstate != NULL && fmstate->aux_fmstate != NULL)\n> + fmstate = fmstate->aux_fmstate;\n>\n> I might be missing something, but I think we should leave the Assert\n> as-is, because we still disallow moving rows to another foreign-table\n> partition that is also an UPDATE subplan result relation, which means\n> that any fmstate should have fmstate->aux_fmstate=NULL.\n>\n> Hmm, yes. 
I forgot that 86dc90056df effectively disabled *all*\nattempts of inserting into foreign partitions that are also UPDATE\ntarget relations, so you are correct that fmstate->aux_fmstate would\nnever be set when entering this function.\n\nThat means this functionality is only useful for foreign partitions\nthat are not also being updated by the original UPDATE.\n\nI've reinstated the Assert, removed the if block as it's useless, and\nupdated the comment a bit to clarify the restriction a bit.\n\n> * Also in that function:\n>\n> - if (fmstate)\n> + if (fmstate != NULL)\n>\n> This is correct, but I would vote for leaving that as-is, to make\n> back-patching easy.\n\nRemoved this hunk.\n\n> That is all I have for now. I will mark this as Waiting on Author.\n\nUpdated patch attached.\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 8 Dec 2022 16:59:46 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "Amit-san,\n\nOn Thu, Dec 8, 2022 at 5:00 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Dec 6, 2022 at 6:47 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > * In postgresGetForeignModifyBatchSize():\n> >\n> > /*\n> > - * Should never get called when the insert is being performed as part of a\n> > - * row movement operation.\n> > + * Use the auxiliary state if any; see postgresBeginForeignInsert() for\n> > + * details on what it represents.\n> > */\n> > - Assert(fmstate == NULL || fmstate->aux_fmstate == NULL);\n> > + if (fmstate != NULL && fmstate->aux_fmstate != NULL)\n> > + fmstate = fmstate->aux_fmstate;\n> >\n> > I might be missing something, but I think we should leave the Assert\n> > as-is, because we still disallow moving rows to another foreign-table\n> > partition that is also an UPDATE subplan result relation, which means\n> > that any fmstate should have 
fmstate->aux_fmstate=NULL.\n>\n> Hmm, yes. I forgot that 86dc90056df effectively disabled *all*\n> attempts of inserting into foreign partitions that are also UPDATE\n> target relations, so you are correct that fmstate->aux_fmstate would\n> never be set when entering this function.\n>\n> That means this functionality is only useful for foreign partitions\n> that are not also being updated by the original UPDATE.\n\nYeah, I think so too.\n\n> I've reinstated the Assert, removed the if block as it's useless, and\n> updated the comment a bit to clarify the restriction a bit.\n\nLooks good to me.\n\n> > * Also in that function:\n> >\n> > - if (fmstate)\n> > + if (fmstate != NULL)\n> >\n> > This is correct, but I would vote for leaving that as-is, to make\n> > back-patching easy.\n>\n> Removed this hunk.\n\nThanks!\n\n> Updated patch attached.\n\nThanks for the patch! I will review the patch a bit more, but I think\nit would be committable.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 8 Dec 2022 20:01:02 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "On Thu, Dec 8, 2022 at 8:01 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Thu, Dec 8, 2022 at 5:00 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Updated patch attached.\n>\n> I will review the patch a bit more, but I think\n> it would be committable.\n\nOne thing I noticed is this bit:\n\n -- Clean up\n-DROP TABLE batch_table, batch_cp_upd_test, batch_table_p0,\nbatch_table_p1 CASCADE;\n+DROP TABLE batch_table, batch_table_p0, batch_table_p1,\nbatch_cp_upd_test, cmdlog CASCADE;\n\nThis would be nitpicking, but this as-proposed will not remove remote\ntables created for foreign-table partitions of the partitioned table\n‘batch_cp_upd_test’. So I modified this a bit further to remove them\nas well. Also, I split this into two, for readability. 
Another thing\nis a typo in a test-case comment: s/a single INSERTs/a single INSERT/.\nI fixed it as well. Other than that, the patch looks good to me.\nAttached is an updated patch. If there are no objections, I will\ncommit the patch.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Wed, 14 Dec 2022 18:44:48 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "On Wed, Dec 14, 2022 at 6:45 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Thu, Dec 8, 2022 at 8:01 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > On Thu, Dec 8, 2022 at 5:00 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > Updated patch attached.\n> >\n> > I will review the patch a bit more, but I think\n> > it would be committable.\n>\n> One thing I noticed is this bit:\n>\n> -- Clean up\n> -DROP TABLE batch_table, batch_cp_upd_test, batch_table_p0,\n> batch_table_p1 CASCADE;\n> +DROP TABLE batch_table, batch_table_p0, batch_table_p1,\n> batch_cp_upd_test, cmdlog CASCADE;\n>\n> This would be nitpicking, but this as-proposed will not remove remote\n> tables created for foreign-table partitions of the partitioned table\n> ‘batch_cp_upd_test’. So I modified this a bit further to remove them\n> as well. Also, I split this into two, for readability. Another thing\n> is a typo in a test-case comment: s/a single INSERTs/a single INSERT/.\n> I fixed it as well. Other than that, the patch looks good to me.\n> Attached is an updated patch. If there are no objections, I will\n> commit the patch.\n\nThanks for the changes. 
LGTM.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 14 Dec 2022 22:28:46 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "Hi Amit-san,\n\nOn Wed, Dec 14, 2022 at 10:29 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Dec 14, 2022 at 6:45 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > One thing I noticed is this bit:\n> >\n> > -- Clean up\n> > -DROP TABLE batch_table, batch_cp_upd_test, batch_table_p0,\n> > batch_table_p1 CASCADE;\n> > +DROP TABLE batch_table, batch_table_p0, batch_table_p1,\n> > batch_cp_upd_test, cmdlog CASCADE;\n> >\n> > This would be nitpicking, but this as-proposed will not remove remote\n> > tables created for foreign-table partitions of the partitioned table\n> > ‘batch_cp_upd_test’. So I modified this a bit further to remove them\n> > as well. Also, I split this into two, for readability. Another thing\n> > is a typo in a test-case comment: s/a single INSERTs/a single INSERT/.\n> > I fixed it as well. Other than that, the patch looks good to me.\n> > Attached is an updated patch. If there are no objections, I will\n> > commit the patch.\n>\n> LGTM.\n\nCool! 
Pushed.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Tue, 20 Dec 2022 19:19:51 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow batched insert during cross-partition updates" }, { "msg_contents": "On Tue, Dec 20, 2022 at 7:18 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Wed, Dec 14, 2022 at 10:29 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Wed, Dec 14, 2022 at 6:45 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > > One thing I noticed is this bit:\n> > >\n> > > -- Clean up\n> > > -DROP TABLE batch_table, batch_cp_upd_test, batch_table_p0,\n> > > batch_table_p1 CASCADE;\n> > > +DROP TABLE batch_table, batch_table_p0, batch_table_p1,\n> > > batch_cp_upd_test, cmdlog CASCADE;\n> > >\n> > > This would be nitpicking, but this as-proposed will not remove remote\n> > > tables created for foreign-table partitions of the partitioned table\n> > > ‘batch_cp_upd_test’. So I modified this a bit further to remove them\n> > > as well. Also, I split this into two, for readability. Another thing\n> > > is a typo in a test-case comment: s/a single INSERTs/a single INSERT/.\n> > > I fixed it as well. Other than that, the patch looks good to me.\n> > > Attached is an updated patch. If there are no objections, I will\n> > > commit the patch.\n> >\n> > LGTM.\n>\n> Cool! Pushed.\n\nThank you, Fujita-san.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 20 Dec 2022 20:19:21 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow batched insert during cross-partition updates" } ]
[ { "msg_contents": "A few years ago we discussed whether to disable SSL compression [0] which ended\nup with it being off by default combined with a recommendation against it in\nthe docs.\n\nOpenSSL themselves disabled SSL compression by default in 2016 in 1.1.0 with\ndistros often having had it disabled for a long while before then. Further,\nTLSv1.3 removes compression entirely on the protocol level mandating that only\nNULL compression is allowed in the ClientHello. NSS, which is discussed in\nanother thread, removed SSL compression entirely in version 3.33 in 2017.\n\nIt seems about time to revisit this since it's unlikely to work anywhere but in\na very small subset of system setups (being disabled by default everywhere) and\nis thus likely to be very untested at best. There is also the security aspect\nwhich is less clear-cut for us compared to HTTP client/servers, but not refuted\n(the linked thread has a good discussion on this).\n\nThe attached removes sslcompression to see what it would look like. The server\nactively disallows it and the parameter is removed, but the sslcompression\ncolumn in the stat view is retained. An alternative could be to retain the\nparameter but not act on it in order to not break scripts etc, but that just\npostpones the pain until when we inevitably do remove it.\n\nThoughts? Any reason to keep supporting SSL compression or is it time for v14\nto remove it? Are there still users leveraging this for protocol compression\nwithout security making it worthwhile to keep?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n[0] https://www.postgresql.org/message-id/flat/595cf3b1-4ffe-7f05-6f72-f72b7afa7993%402ndquadrant.com", "msg_date": "Thu, 18 Feb 2021 13:51:18 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Disallow SSL compression?" 
}, { "msg_contents": "On Thu, Feb 18, 2021 at 1:51 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> A few years ago we discussed whether to disable SSL compression [0] which ended\n> up with it being off by default combined with a recommendation against it in\n> the docs.\n>\n> OpenSSL themselves disabled SSL compression by default in 2016 in 1.1.0 with\n> distros often having had it disabled for a long while before then. Further,\n> TLSv1.3 removes compression entirely on the protocol level mandating that only\n> NULL compression is allowed in the ClientHello. NSS, which is discussed in\n> another thread, removed SSL compression entirely in version 3.33 in 2017.\n>\n> It seems about time to revisit this since it's unlikely to work anywhere but in\n> a very small subset of system setups (being disabled by default everywhere) and\n> is thus likely to be very untested at best. There is also the security aspect\n> which is less clear-cut for us compared to HTTP client/servers, but not refuted\n> (the linked thread has a good discussion on this).\n\nAgreed. It will also help with not having to implement it in new SSL\nimplementations I'm sure :)\n\n\n> The attached removes sslcompression to see what it would look like. The server\n> actively disallows it and the parameter is removed, but the sslcompression\n> column in the stat view is retained. An alternative could be to retain the\n> parameter but not act on it in order to not break scripts etc, but that just\n> postpones the pain until when we inevitably do remove it.\n>\n> Thoughts? Any reason to keep supporting SSL compression or is it time for v14\n> to remove it? Are there still users leveraging this for protocol compression\n> without security making it worthwhile to keep?\n\nWhen the last round of discussion happened, I had multiple customers\nwho did exactly that. 
None of them do that anymore, due to the pain of\nmaking it work...\n\nI think for libpq we want to keep the option for a while but making it\na no-op, to not unnecessarily break systems where people just upgrade\nlibpq, though. And document it as such having no effect and \"will\neventually be removed\".\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Mon, 22 Feb 2021 11:52:15 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Disallow SSL compression?" }, { "msg_contents": "> On 22 Feb 2021, at 11:52, Magnus Hagander <magnus@hagander.net> wrote:\n> \n> On Thu, Feb 18, 2021 at 1:51 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> \n>> A few years ago we discussed whether to disable SSL compression [0] which ended\n>> up with it being off by default combined with a recommendation against it in\n>> the docs.\n>> \n>> OpenSSL themselves disabled SSL compression by default in 2016 in 1.1.0 with\n>> distros often having had it disabled for a long while before then. Further,\n>> TLSv1.3 removes compression entirely on the protocol level mandating that only\n>> NULL compression is allowed in the ClientHello. NSS, which is discussed in\n>> another thread, removed SSL compression entirely in version 3.33 in 2017.\n>> \n>> It seems about time to revisit this since it's unlikely to work anywhere but in\n>> a very small subset of system setups (being disabled by default everywhere) and\n>> is thus likely to be very untested at best. There is also the security aspect\n>> which is less clear-cut for us compared to HTTP client/servers, but not refuted\n>> (the linked thread has a good discussion on this).\n> \n> Agreed. It will also help with not having to implement it in new SSL\n> implementations I'm sure :)\n\nNot really, no other TLS library I would consider using actually has\ncompression (except maybe wolfSSL?). 
GnuTLS and NSS both removed it, and\nSecure Transport and Schannel never had it AFAIK.\n\n>> The attached removes sslcompression to see what it would look like. The server\n>> actively disallows it and the parameter is removed, but the sslcompression\n>> column in the stat view is retained. An alternative could be to retain the\n>> parameter but not act on it in order to not break scripts etc, but that just\n>> postpones the pain until when we inevitably do remove it.\n>> \n>> Thoughts? Any reason to keep supporting SSL compression or is it time for v14\n>> to remove it? Are there still users leveraging this for protocol compression\n>> without security making it worthwhile to keep?\n> \n> When the last round of discussion happened, I had multiple customers\n> who did exactly that. None of them do that anymore, due to the pain of\n> making it work...\n\nUnsurprisingly.\n\n> I think for libpq we want to keep the option for a while but making it\n> a no-op, to not unnecessarily break systems where people just upgrade\n> libpq, though. And document it as such having no effect and \"will\n> eventually be removed\".\n\nAgreed, that's better.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Mon, 22 Feb 2021 12:27:38 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Disallow SSL compression?" }, { "msg_contents": "On Mon, Feb 22, 2021 at 12:27:38PM +0100, Daniel Gustafsson wrote:\n> On 22 Feb 2021, at 11:52, Magnus Hagander <magnus@hagander.net> wrote:\n>> I think for libpq we want to keep the option for a while but making it\n>> a no-op, to not unnecessarily break systems where people just upgrade\n>> libpq, though. And document it as such having no effect and \"will\n>> eventually be removed\".\n> \n> Agreed, that's better.\n\n+1. There is just pain waiting ahead when breaking connection strings\nthat used to work previously. 
A \"while\" could take a long time\nthough, see the case of \"tty\" that's still around (cb7fb3c). Could\nyou update the patch to do that? This requires an update of\nfe-connect.c and libpq.sgml.\n--\nMichael", "msg_date": "Fri, 26 Feb 2021 11:34:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Disallow SSL compression?" }, { "msg_contents": "On Mon, Feb 22, 2021 at 12:28 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 22 Feb 2021, at 11:52, Magnus Hagander <magnus@hagander.net> wrote:\n> >\n> > On Thu, Feb 18, 2021 at 1:51 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >>\n> >> A few years ago we discussed whether to disable SSL compression [0] which ended\n> >> up with it being off by default combined with a recommendation against it in\n> >> the docs.\n> >>\n> >> OpenSSL themselves disabled SSL compression by default in 2016 in 1.1.0 with\n> >> distros often having had it disabled for a long while before then. Further,\n> >> TLSv1.3 removes compression entirely on the protocol level mandating that only\n> >> NULL compression is allowed in the ClientHello. NSS, which is discussed in\n> >> another thread, removed SSL compression entirely in version 3.33 in 2017.\n> >>\n> >> It seems about time to revisit this since it's unlikely to work anywhere but in\n> >> a very small subset of system setups (being disabled by default everywhere) and\n> >> is thus likely to be very untested at best. There is also the security aspect\n> >> which is less clear-cut for us compared to HTTP client/servers, but not refuted\n> >> (the linked thread has a good discussion on this).\n> >\n> > Agreed. It will also help with not having to implement it in new SSL\n> > implementations I'm sure :)\n>\n> Not really, no other TLS library I would consider using actually has\n> compression (except maybe wolfSSL?). 
GnuTLS and NSS both removed it, and\n> Secure Transport and Schannel never had it AFAIK.\n\nAh, well, you'd still have to implement some empty placeholder :)\n\n\n> >> The attached removes sslcompression to see what it would look like. The server\n> >> actively disallows it and the parameter is removed, but the sslcompression\n> >> column in the stat view is retained. An alternative could be to retain the\n> >> parameter but not act on it in order to not break scripts etc, but that just\n> >> postpones the pain until when we inevitably do remove it.\n> >>\n> >> Thoughts? Any reason to keep supporting SSL compression or is it time for v14\n> >> to remove it? Are there still users leveraging this for protocol compression\n> >> without security making it worthwhile to keep?\n> >\n> > When the last round of discussion happened, I had multiple customers\n> > who did exactly that. None of them do that anymore, due to the pain of\n> > making it work...\n>\n> Unsurprisingly.\n>\n> > I think for libpq we want to keep the option for a while but making it\n> > a no-op, to not unnecessarily break systems where people just upgrade\n> > libpq, though. And document it as such having no effect and \"will\n> > eventually be removed\".\n>\n> Agreed, that's better.\n\nIn fact, pg_basebackup with -R will generate a connection string that\nincludes sslcompression=0 when used today (unless you jump through the\nhoops to make it work), so not accepting that now would definitely\nbreak a lot of things needlessly.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Fri, 26 Feb 2021 11:02:25 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Disallow SSL compression?"
}, { "msg_contents": "> On 26 Feb 2021, at 11:02, Magnus Hagander <magnus@hagander.net> wrote:\n> On Mon, Feb 22, 2021 at 12:28 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> \n>>> On 22 Feb 2021, at 11:52, Magnus Hagander <magnus@hagander.net> wrote:\n\n>>> Agreed. It will also help with not having to implement it in new SSL\n>>> implementations I'm sure :)\n>> \n>> Not really, no other TLS library I would consider using actually has\n>> compression (except maybe wolfSSL?). GnuTLS and NSS both removed it, and\n>> Secure Transport and Schannel never had it AFAIK.\n> \n> Ah, well, you'd still have to implement some empty placeholder :)\n\nCorrect.\n\n>>>> The attached removes sslcompression to see what it would look like. The server\n>>>> actively disallows it and the parameter is removed, but the sslcompression\n>>>> column in the stat view is retained. An alternative could be to retain the\n>>>> parameter but not act on it in order to not break scripts etc, but that just\n>>>> postpones the pain until when we inevitably do remove it.\n>>>> \n>>>> Thoughts? Any reason to keep supporting SSL compression or is it time for v14\n>>>> to remove it? Are there still users leveraging this for protocol compression\n>>>> without security making it worthwhile to keep?\n>>> \n>>> When the last round of discussion happened, I had multiple customers\n>>> who did exactly that. None of them do that anymore, due to the pain of\n>>> making it work...\n>> \n>> Unsurprisingly.\n>> \n>>> I think for libpq we want to keep the option for a while but making it\n>>> a no-op, to not unnecessarily break systems where people just upgrade\n>>> libpq, though. 
And document it as such having no effect and \"will\n>>> eventually be removed\".\n>> \n>> Agreed, that's better.\n> \n> In fact, pg_basebackup with -R will generate a connection string that\n> includes sslcompression=0 when used today (unless you jump through the\n> hoops to make it work), so not accepting that now would definitely\n> break a lot of things needlessly.\n\nYup, and as mentioned elsewhere in the thread the standard way of doing it is\nto leave the param behind and just document it as not in use. Attached is a v2\nwhich retains the sslcompression parameter for backwards compatibility.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Fri, 26 Feb 2021 20:34:08 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Disallow SSL compression?" }, { "msg_contents": "> On 26 Feb 2021, at 03:34, Michael Paquier <michael@paquier.xyz> wrote:\n\n> There is just pain waiting ahead when breaking connection strings\n> that used to work previously. A \"while\" could take a long time\n> though, see the case of \"tty\" that's still around (cb7fb3c).\n\nI see your tty removal from 2003, and raise you one \"authtype\" which was axed\non January 26 1998 in commit d5bbe2aca55bc833e38c768d7f82, but which is still\naround. More on that in a separate thread though.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n", "msg_date": "Fri, 26 Feb 2021 21:02:00 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Disallow SSL compression?"
}, { "msg_contents": "> On 26 Feb 2021, at 20:34, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> Attached is a v2 which retains the sslcompression parameter for backwards\n> compatibility.\n\n\nAnd now a v3 which fixes an oversight in postgres_fdw as well as adds an SSL\nTAP test to cover deprecated parameters.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Wed, 3 Mar 2021 11:31:22 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Disallow SSL compression?" }, { "msg_contents": "On 03.03.21 11:31, Daniel Gustafsson wrote:\n>> On 26 Feb 2021, at 20:34, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> Attached is a v2 which retains the sslcompression parameter for backwards\n>> compatibility.\n> \n> \n> And now a v3 which fixes an oversight in postgres_fdw as well as adds an SSL\n> TAP test to cover deprecated parameters.\n\nPer your other thread, you should also remove the environment variable.\n\nIn postgres_fdw, I think commenting it out is not the right change. The \nother commented out values are still valid settings but are omitted from \nthe test for other reasons. It's not entirely all clear, but we don't \nhave to keep obsolete stuff in there forever.\n\n\n", "msg_date": "Wed, 3 Mar 2021 15:14:01 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Disallow SSL compression?" }, { "msg_contents": "On Wed, Mar 03, 2021 at 03:14:01PM +0100, Peter Eisentraut wrote:\n> Per your other thread, you should also remove the environment variable.\n> \n> In postgres_fdw, I think commenting it out is not the right change. The\n> other commented out values are still valid settings but are omitted from the\n> test for other reasons. It's not entirely all clear, but we don't have to\n> keep obsolete stuff in there forever.\n\nAgreed on both points. 
Also, it seems a bit weird to keep around\npg_stat_ssl.compression while we know that it will always be false.\nWouldn't it be better to get rid of that in PgBackendSSLStatus and\npg_stat_ssl then?\n--\nMichael", "msg_date": "Thu, 4 Mar 2021 19:59:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Disallow SSL compression?" }, { "msg_contents": "> On 3 Mar 2021, at 15:14, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 03.03.21 11:31, Daniel Gustafsson wrote:\n>>> On 26 Feb 2021, at 20:34, Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> Attached is a v2 which retains the sslcompression parameter for backwards\n>>> compatibility.\n>> And now a v3 which fixes an oversight in postgres_fdw as well as adds an SSL\n>> TAP test to cover deprecated parameters.\n> \n> Per your other thread, you should also remove the environment variable.\n\nFixed.\n\n> In postgres_fdw, I think commenting it out is not the right change. The other commented out values are still valid settings but are omitted from the test for other reasons. It's not entirely all clear, but we don't have to keep obsolete stuff in there forever.\n\nAh, I didn't get that distinction but that makes sense. Fixed.\n\nThe attached version takes a step further and removes sslcompression from\npg_conn and just eats the value as there is no use in setting a dummy value. It\nalso removes compression from PgBackendSSLStatus and be_tls_get_compression as\nraised by Michael downthread. I opted for keeping the column in pg_stat_ssl\nwith a note in the documentation that it will be removed, for the same\nbackwards compatibility reason of eating the connection param without acting on\nit. This might be overthinking it however.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Thu, 4 Mar 2021 23:52:56 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Disallow SSL compression?"
}, { "msg_contents": "> On 4 Mar 2021, at 11:59, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Mar 03, 2021 at 03:14:01PM +0100, Peter Eisentraut wrote:\n>> Per your other thread, you should also remove the environment variable.\n>> \n>> In postgres_fdw, I think commenting it out is not the right change. The\n>> other commented out values are still valid settings but are omitted from the\n>> test for other reasons. It's not entirely all clear, but we don't have to\n>> keep obsolete stuff in there forever.\n> \n> Agreed on both points. Also, it seems a bit weird to keep around\n> pg_stat_ssl.compression while we know that it will always be false.\n> Wouldn't it be better to get rid of that in PgBackendSSLStatus and\n> pg_stat_ssl then?\n\nFixed in the v4 posted just now, although I opted for keeping the column in\npg_stat_ssl for backwards compatibility with a doc note.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 4 Mar 2021 23:54:17 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Disallow SSL compression?" }, { "msg_contents": "On Thu, Mar 04, 2021 at 11:52:56PM +0100, Daniel Gustafsson wrote:\n> The attached version takes a step further and removes sslcompression from\n> pg_conn and just eats the value as there is no use in setting a dummy value. It\n> also removes compression from PgBackendSSLStatus and be_tls_get_compression as\n> raised by Michael downthread. I opted for keeping the column in pg_stat_ssl\n> with a note in the documentation that it will be removed, for the same\n> backwards compatibility reason of eating the connection param without acting on\n> it. This might be overthinking it however.\n\nFWIW, I would vote to nuke it from all those places, reducing a bit\npg_stat_get_activity() while on it. 
Keeping it around in the system\ncatalogs may cause confusion IMHO, by making people think that it is\nstill possible to get into configurations where sslcompression could\nbe really enabled. The rest of the patch looks fine to me.\n--\nMichael", "msg_date": "Fri, 5 Mar 2021 16:04:20 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Disallow SSL compression?" }, { "msg_contents": "> On 5 Mar 2021, at 08:04, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Thu, Mar 04, 2021 at 11:52:56PM +0100, Daniel Gustafsson wrote:\n>> The attached version takes a step further and removes sslcompression from\n>> pg_conn and just eats the value as there is no use in setting a dummy value. It\n>> also removes compression from PgBackendSSLStatus and be_tls_get_compression as\n>> raised by Michael downthread. I opted for keeping the column in pg_stat_ssl\n>> with a note in the documentation that it will be removed, for the same\n>> backwards compatibility reason of eating the connection param without acting on\n>> it. This might be overthinking it however.\n> \n> FWIW, I would vote to nuke it from all those places, reducing a bit\n> pg_stat_get_activity() while on it. Keeping it around in the system\n> catalogs may cause confusion IMHO, by making people think that it is\n> still possible to get into configurations where sslcompression could\n> be really enabled. The rest of the patch looks fine to me.\n\nAttached is a version which removes that as well. I left the compression\nkeyword in PQsslAttribute on purpose, not really for backwards compatibility\n(PQsslAttributeNames takes care of that) but rather since it's a more generic\nconnection-info function.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Fri, 5 Mar 2021 13:21:01 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Disallow SSL compression?" 
}, { "msg_contents": "On Fri, Mar 05, 2021 at 01:21:01PM +0100, Daniel Gustafsson wrote:\n> On 5 Mar 2021, at 08:04, Michael Paquier <michael@paquier.xyz> wrote:\n>> FWIW, I would vote to nuke it from all those places, reducing a bit\n>> pg_stat_get_activity() while on it. Keeping it around in the system\n>> catalogs may cause confusion IMHO, by making people think that it is\n>> still possible to get into configurations where sslcompression could\n>> be really enabled. The rest of the patch looks fine to me.\n> \n> Attached is a version which removes that as well.\n\nPeter, Magnus, any comments about this point?\n\n> I left the compression\n> keyword in PQsslAttribute on purpose, not really for backwards compatibility\n> (PQsslAttributeNames takes care of that) but rather since it's a more generic\n> connection-info function.\n\nMakes sense.\n--\nMichael", "msg_date": "Fri, 5 Mar 2021 21:37:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Disallow SSL compression?" }, { "msg_contents": "On Fri, Mar 5, 2021 at 1:37 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Mar 05, 2021 at 01:21:01PM +0100, Daniel Gustafsson wrote:\n> > On 5 Mar 2021, at 08:04, Michael Paquier <michael@paquier.xyz> wrote:\n> >> FWIW, I would vote to nuke it from all those places, reducing a bit\n> >> pg_stat_get_activity() while on it. Keeping it around in the system\n> >> catalogs may cause confusion IMHO, by making people think that it is\n> >> still possible to get into configurations where sslcompression could\n> >> be really enabled. The rest of the patch looks fine to me.\n> >\n> > Attached is a version which removes that as well.\n>\n> Peter, Magnus, any comments about this point?\n\nWe've broken stats views before. While it'd be nice if we could group\nmultiple breakages at the same time, I don't think it's that\nimportant. 
Better to get rid of it once and for all from as many\nplaces as possible.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Fri, 5 Mar 2021 17:44:20 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Disallow SSL compression?" }, { "msg_contents": "On Fri, Mar 05, 2021 at 05:44:20PM +0100, Magnus Hagander wrote:\n> We've broken stats views before. While it'd be nice if we could group\n> multiple breakages at the same time, I don't think it's that\n> important. Better to get rid of it once and for all from as many\n> places as possible.\n\nOkay, cool. I'd rather wait more for Peter before doing anything, so\nif there are no objections, I'll look at that stuff again at the\nbeginning of next week and perhaps apply it. If you wish to take care\nof that yourself, please feel free to do so, of course.\n--\nMichael", "msg_date": "Sat, 6 Mar 2021 10:39:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Disallow SSL compression?" }, { "msg_contents": "On Sat, Mar 06, 2021 at 10:39:52AM +0900, Michael Paquier wrote:\n> Okay, cool. I'd rather wait more for Peter before doing anything, so\n> if there are no objections, I'll look at that stuff again at the\n> beginning of next week and perhaps apply it. If you wish to take care\n> of that yourself, please feel free to do so, of course.\n\nSo, I have looked at the proposed patch in details, fixed the\ndocumentation of pg_stat_ssl where compression was still listed,\nchecked a couple of things with and without OpenSSL, across past major\nPG versions with OpenSSL 1.0.2 to see if compression was getting\ndisabled correctly. And things look all good, so applied.\n--\nMichael", "msg_date": "Tue, 9 Mar 2021 11:19:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Disallow SSL compression?" } ]
[ { "msg_contents": "\nI think our documentation is mistaken about what it means for a cursor\nto be \"sensitive\" or \"insensitive\".\n\nThe definition in SQL:2016 is:\n\n A change to SQL-data is said to be independent of a cursor CR if and\n only if it is not made by an <update statement: positioned> or a\n <delete statement: positioned> that is positioned on CR.\n\n A change to SQL-data is said to be significant to CR if and only\n if it is independent of CR, and, had it been committed before CR\n was opened, would have caused the sequence of rows in the result\n set descriptor of CR to be different in any respect.\n\n ...\n\n If a cursor is open, and the SQL-transaction in which the cursor\n was opened makes a significant change to SQL-data, then whether\n that change is visible through that cursor before it is closed is\n determined as follows:\n\n - If the cursor is insensitive, then significant changes are not\n visible.\n - If the cursor is sensitive, then significant changes are\n visible.\n - If the cursor is asensitive, then the visibility of significant\n changes is implementation-dependent.\n\nSo I think a test case would be:\n\ncreate table t1 (a int);\ninsert into t1 values (1);\nbegin;\ndeclare c1 cursor for select * from t1;\ninsert into t1 values (2);\nfetch next from c1; -- returns 1\nfetch next from c1; -- ???\ncommit;\n\nWith a sensitive cursor, the second fetch would return 2, with an\ninsensitive cursor, the second fetch returns nothing. The latter\nhappens with PostgreSQL.\n\nThe DECLARE man page describes it thus:\n\n INSENSITIVE\n Indicates that data retrieved from the cursor should be\n unaffected by updates to the table(s) underlying the cursor\n that occur after the cursor is created. 
In PostgreSQL, this is\n the default behavior; so this key word has no effect and is\n only accepted for compatibility with the SQL standard.\n\nWhich is not wrong, but it omits that this is only relevant for\nchanges in the same transaction.\n\nLater in the DECLARE man page, it says:\n\n If the cursor's query includes FOR UPDATE or FOR SHARE, then\n returned rows are locked at the time they are first fetched, in\n the same way as for a regular SELECT command with these\n options. In addition, the returned rows will be the most\n up-to-date versions; therefore these options provide the\n equivalent of what the SQL standard calls a \"sensitive\n cursor\".\n\nAnd that seems definitely wrong. Declaring c1 in the above example as\nFOR UPDATE or FOR SHARE does not change the result. I think this\ndiscussion is mixing up the concept of cursor sensitivity with\ntransaction isolation.\n\nThoughts?\n\n\n", "msg_date": "Thu, 18 Feb 2021 17:00:28 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "cursor sensitivity misunderstanding" }, { "msg_contents": "On Thu, Feb 18, 2021 at 9:00 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n>\n> And that seems definitely wrong. Declaring c1 in the above example as\n> FOR UPDATE or FOR SHARE does not change the result. I think this\n> discussion is mixing up the concept of cursor sensitivity with\n> transaction isolation.\n>\n> Thoughts?\n>\n>\nThis came up on Discord in the context of pl/pgsql last month - never\nreally came to a conclusion.\n\n\"\nopen curs FOR SELECT * FROM Res FOR UPDATE;\n LOOP\n FETCH curs into record;\n EXIT WHEN NOT FOUND;\n INSERT INTO Res SELECT Type.Name\n FROM Type\n WHERE Type.SupClass = record.Name;\n END LOOP;\n\"\n\nThe posted question was: \"this doesn't go over rows added during the loop\ndespite the FOR UPDATE\"\n\nThe OP was doing a course based on Oracle and was confused regarding our\nbehavior. 
The documentation failed to help me provide a useful response,\nso I'd agree there is something here that needs reworking if not outright\nfixing.\n\nDavid J.\n\nOn Thu, Feb 18, 2021 at 9:00 AM Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\nAnd that seems definitely wrong.  Declaring c1 in the above example as\nFOR UPDATE or FOR SHARE does not change the result.  I think this\ndiscussion is mixing up the concept of cursor sensitivity with\ntransaction isolation.\n\nThoughts?This came up on Discord in the context of pl/pgsql last month - never really came to a conclusion.\"open curs FOR SELECT * FROM Res FOR UPDATE;    LOOP        FETCH curs into record;        EXIT WHEN NOT FOUND;        INSERT INTO Res SELECT Type.Name                        FROM Type                        WHERE Type.SupClass = record.Name;    END LOOP;\"The posted question was: \"this doesn't go over rows added during the loop despite the FOR UPDATE\"The OP was doing a course based on Oracle and was confused regarding our behavior.  The documentation failed to help me provide a useful response, so I'd agree there is something here that needs reworking if not outright fixing.David J.", "msg_date": "Thu, 18 Feb 2021 09:11:27 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: cursor sensitivity misunderstanding" }, { "msg_contents": "\nOn 18.02.21 17:11, David G. Johnston wrote:\n> The OP was doing a course based on Oracle and was confused regarding our \n> behavior.  
The documentation failed to help me provide a useful \n> response, so I'd agree there is something here that needs reworking if \n> not outright fixing.\n\nAccording to the piece of the standard that I posted, the sensitivity \nbehavior here is implementation-dependent (not even -defined), so both \nimplementations are correct.\n\nBut the poster was apparently also confused by the same piece of \ndocumentation.\n\nIf you consider the implementation of MVCC in PostgreSQL, then the \ncurrent behavior makes sense. I suspect that this consideration was \nmuch more interesting for older system with locking-based concurrency \nand where \"read uncommitted\" was a real thing. With the current system, \ninsensitive cursors are essentially free and sensitive cursors would \nrequire quite a bit of effort to implement.\n\n\n", "msg_date": "Thu, 18 Feb 2021 19:14:30 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: cursor sensitivity misunderstanding" }, { "msg_contents": "On 18.02.21 19:14, Peter Eisentraut wrote:\n> On 18.02.21 17:11, David G. Johnston wrote:\n>> The OP was doing a course based on Oracle and was confused regarding \n>> our behavior.  The documentation failed to help me provide a useful \n>> response, so I'd agree there is something here that needs reworking if \n>> not outright fixing.\n> \n> According to the piece of the standard that I posted, the sensitivity \n> behavior here is implementation-dependent (not even -defined), so both \n> implementations are correct.\n> \n> But the poster was apparently also confused by the same piece of \n> documentation.\n\nI came up with the attached patch to sort this out a bit. It does not \nchange any cursor behavior. 
But the documentation now uses the terms \nmore correctly and explains the differences between SQL and the \nPostgreSQL implementation better, I think.", "msg_date": "Thu, 25 Feb 2021 16:37:02 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: cursor sensitivity misunderstanding" }, { "msg_contents": "On Thu, Feb 25, 2021 at 8:37 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 18.02.21 19:14, Peter Eisentraut wrote:\n> > On 18.02.21 17:11, David G. Johnston wrote:\n> >> The OP was doing a course based on Oracle and was confused regarding\n> >> our behavior. The documentation failed to help me provide a useful\n> >> response, so I'd agree there is something here that needs reworking if\n> >> not outright fixing.\n> >\n> > According to the piece of the standard that I posted, the sensitivity\n> > behavior here is implementation-dependent (not even -defined), so both\n> > implementations are correct.\n> >\n> > But the poster was apparently also confused by the same piece of\n> > documentation.\n>\n> I came up with the attached patch to sort this out a bit. It does not\n> change any cursor behavior. But the documentation now uses the terms\n> more correctly and explains the differences between SQL and the\n> PostgreSQL implementation better, I think.\n>\n\nthanks!, though this seems like the wrong approach. Simply noting that our\ncursor is not standard compliant (or at least we don't implement a\nstandard-compliant sensitive cursor) should suffice. I don't really get\nthe point of adding ASENSITIVE if we don't have SENSITIVE too. 
I'm also\nunfamiliar with the standard default behaviors to comment on where we\ndiffer there - but that should be easy enough to address.\n\nI would suggest limiting the doc change to pointing out that we do allow\nfor a standard-compliant INSENSITIVE behaving cursor - one that precludes\nlocal sensitively via the FOR SHARE and FOR UPDATE clauses - by adding that\nkeyword. Otherwise, while the cursor is still (and always) insensitive\nglobally the cursor can become locally sensitive implicitly by including a\nFOR UPDATE or FOR SHARE clause in the query. Then maybe consider improving\nthe notes section through subtraction once a more clear initial\npresentation has been made to the reader.\n\nDavid J.\n\nOn Thu, Feb 25, 2021 at 8:37 AM Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:On 18.02.21 19:14, Peter Eisentraut wrote:\n> On 18.02.21 17:11, David G. Johnston wrote:\n>> The OP was doing a course based on Oracle and was confused regarding \n>> our behavior.  The documentation failed to help me provide a useful \n>> response, so I'd agree there is something here that needs reworking if \n>> not outright fixing.\n> \n> According to the piece of the standard that I posted, the sensitivity \n> behavior here is implementation-dependent (not even -defined), so both \n> implementations are correct.\n> \n> But the poster was apparently also confused by the same piece of \n> documentation.\n\nI came up with the attached patch to sort this out a bit.  It does not \nchange any cursor behavior.  But the documentation now uses the terms \nmore correctly and explains the differences between SQL and the \nPostgreSQL implementation better, I think.thanks!, though this seems like the wrong approach.  Simply noting that our cursor is not standard compliant (or at least we don't implement a standard-compliant sensitive cursor) should suffice.  I don't really get the point of adding ASENSITIVE if we don't have SENSITIVE too.  
I'm also unfamiliar with the standard default behaviors to comment on where we differ there - but that should be easy enough to address.I would suggest limiting the doc change to pointing out that we do allow for a standard-compliant INSENSITIVE behaving cursor - one that precludes local sensitively via the FOR SHARE and FOR UPDATE clauses - by adding that keyword.  Otherwise, while the cursor is still (and always) insensitive globally the cursor can become locally sensitive implicitly by including a FOR UPDATE or FOR SHARE clause in the query.  Then maybe consider improving the notes section through subtraction once a more clear initial presentation has been made to the reader.David J.", "msg_date": "Mon, 8 Mar 2021 16:22:31 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: cursor sensitivity misunderstanding" }, { "msg_contents": "On 09.03.21 00:22, David G. Johnston wrote:\n> I came up with the attached patch to sort this out a bit.  It does not\n> change any cursor behavior.  But the documentation now uses the terms\n> more correctly and explains the differences between SQL and the\n> PostgreSQL implementation better, I think.\n> \n> \n> thanks!, though this seems like the wrong approach.  Simply noting that \n> our cursor is not standard compliant (or at least we don't implement a \n> standard-compliant sensitive cursor) should suffice.\n\nWell, we could just say, our behavior wrong/different. But I think it's \nactually right, we were just looking at an incorrect premise and making \nadditional claims about it that are not accurate.\n\n> I don't really get \n> the point of adding ASENSITIVE if we don't have SENSITIVE too.  I'm also \n> unfamiliar with the standard default behaviors to comment on where we \n> differ there - but that should be easy enough to address.\n\nASENSITIVE is merely a keyword to select the default behavior. 
Other \nSQL implementations also have it, so it seems sensible to add it while \nwe're polishing this.\n\n\n", "msg_date": "Thu, 11 Mar 2021 23:02:34 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: cursor sensitivity misunderstanding" } ]
[ { "msg_contents": "Hi,\n\nI’d like to use PG as an analytics engine on multiple separately created\nread-only datasets. The datasets, in my case, are independently created by\nbringing up a local PG instance, populating a table, and shutting down the\nPG instance. I do have control over both the creation and the query\nprocesses.\n\nAfter some research, I realize this is not a standard use-case, so I’m open\nto development work in the internals as required. I’d appreciate some\nadvice/guidance on possible approaches.\n\n 1. For dealing with independently created tables, I’m planning to have\n them organized in tablespaces during creation, let’s say one per table. In\n the PG instance where querying is performed, I can use symlinks to the\n relevant tablespaces from the data directory, and modify system catalogs\n appropriately for PG to be able to work with those tables.\n\nIs this a viable approach?\n\nAre there better alternatives?\n\nAny pitfalls I should expect?\n\n\n\n 1. From my research so far, it seems PG wouldn’t work out-of-the-box if\n those tablespaces are read-only. (Even assuming various services like\n vacuum, etc. are all turned off.)\n\nIn some documentation/slides, it’s mentioned that even in case of only the\nselect queries PG might write some metadata into the table’s pages (hints,\nor something like that.)\n\nIs this correct? Or is there a way to prevent all writes to the tables\nwithout modifying PG code?\n\nFor my education, where can I find some info on those “hints”? (Would\nalso appreciate pointers to code.)\n\nIf it’s not possible to work with readonly tables without code changes,\nwhat would be the possible approaches for development?\n\nWould appreciate any advice/guidance/recommendations.\n\nThanks in advance!", "msg_date": "Thu, 18 Feb 2021 21:13:34 +0200", "msg_from": "Amichai Amar <amichai.amar@gmail.com>", "msg_from_op": true, "msg_subject": "many-to-many problem" } ]
[ { "msg_contents": "When I run \"autoreconf\" on the master branch, git generates the diff\nbelow. Shouldn't it just be applied? I suppose someone changed configure.ac\nand forgot to update the generated file.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Fri, 19 Feb 2021 07:40:07 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "pg_config_h.in not up-to-date" }, { "msg_contents": "Antonin Houska <ah@cybertec.at> writes:\n> When I run \"autoreconf\" on the master branch, git generates the diff\n> below. Shouldn't it just be applied? I suppose someone changed configure.ac\n> and forgot to update the generated file.\n\nYeah, looks like fe61df7f8 is at fault. Michael?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Feb 2021 01:42:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_config_h.in not up-to-date" }, { "msg_contents": "On Fri, Feb 19, 2021 at 01:42:38AM -0500, Tom Lane wrote:\n> Antonin Houska <ah@cybertec.at> writes:\n>> When I run \"autoreconf\" on the master branch, git generates the diff\n>> below. Shouldn't it just be applied? I suppose someone changed configure.ac\n>> and forgot to update the generated file.\n> \n> Yeah, looks like fe61df7f8 is at fault. Michael?\n\nIndeed, thanks. It looks like a \"git add\" that was fat-fingered. I\nwould like to make things more consistent with the attached.\nThoughts?\n--\nMichael", "msg_date": "Fri, 19 Feb 2021 16:13:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_config_h.in not up-to-date" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Indeed, thanks. It looks like a \"git add\" that was fat-fingered. 
I\n> would like to make things more consistent with the attached.\n\n+1, but I think the first period in this comment is redundant:\n\n+ AC_DEFINE([USE_OPENSSL], 1, [Define to 1 to build with OpenSSL support. (--with-ssl=openssl).])\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Feb 2021 02:21:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_config_h.in not up-to-date" }, { "msg_contents": "On Fri, Feb 19, 2021 at 02:21:21AM -0500, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> > Indeed, thanks. It looks like a \"git add\" that was fat-fingered. I\n> > would like to make things more consistent with the attached.\n> \n> +1, but I think the first period in this comment is redundant:\n> \n> + AC_DEFINE([USE_OPENSSL], 1, [Define to 1 to build with OpenSSL support. (--with-ssl=openssl).])\n\nI guess that you mean the second period here to be more consistent\nwith the others? That would mean the following diff:\n+ AC_DEFINE([USE_OPENSSL], 1, [Define to 1 to build with OpenSSL support. (--with-ssl=openssl)])\n--\nMichael", "msg_date": "Fri, 19 Feb 2021 16:34:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_config_h.in not up-to-date" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Feb 19, 2021 at 02:21:21AM -0500, Tom Lane wrote:\n>> +1, but I think the first period in this comment is redundant:\n>> + AC_DEFINE([USE_OPENSSL], 1, [Define to 1 to build with OpenSSL support. (--with-ssl=openssl).])\n\n> I guess that you mean the second period here to be more consistent\n> with the others? That would mean the following diff:\n> + AC_DEFINE([USE_OPENSSL], 1, [Define to 1 to build with OpenSSL support. (--with-ssl=openssl)])\n\nHm. It should be consistent with the rest, for sure. 
Personally I'd put\nthe only period at the end, but I suppose we should stick with the\nprevailing style if there is one.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Feb 2021 09:57:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_config_h.in not up-to-date" }, { "msg_contents": "On Fri, Feb 19, 2021 at 09:57:22AM -0500, Tom Lane wrote:\n> Hm. It should be consistent with the rest, for sure. Personally I'd put\n> the only period at the end, but I suppose we should stick with the\n> prevailing style if there is one.\n\nThanks. I have just used the same style as XML, LDAP and LLVM then.\nThanks Antonin for the report.\n--\nMichael", "msg_date": "Sat, 20 Feb 2021 10:20:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_config_h.in not up-to-date" } ]
[ { "msg_contents": "Hi,\n\nattached is a patch that I think is cleaning up the API between Postgres \nand the logical decoding plugin. Up until now, not only transactions \nrolled back, but also some committed transactions were filtered and not \npresented to the output plugin. While it is documented that aborted \ntransactions are not decoded, the second exception has not been documented.\n\nThe difference is with committed empty transactions that have a snapshot \nversus those that do not. I think that's arbitrary and propose to \nremove this distinction, so that all committed transactions are decoded.\n\nIn the case of decoding a two-phase transaction, I argue that this is \neven more important, as the gid potentially carries information.\n\nPlease consider the attached patch, which drops the mentioned filter. \nIt also adjusts tests to show the difference and provides a minor \nclarification to the documentation.\n\nRegards\n\nMarkus", "msg_date": "Fri, 19 Feb 2021 13:36:25 +0100", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": true, "msg_subject": "[PATCH] Present all committed transaction to the output plugin" }, { "msg_contents": "On Fri, Feb 19, 2021 at 6:06 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> attached is a patch that I think is cleaning up the API between Postgres\n> and the logical decoding plugin. Up until now, not only transactions\n> rolled back, but also some committed transactions were filtered and not\n> presented to the output plugin. While it is documented that aborted\n> transactions are not decoded, the second exception has not been documented.\n>\n> The difference is with committed empty transactions that have a snapshot\n> versus those that do not. I think that's arbitrary and propose to\n> remove this distinction, so that all committed transactions are decoded.\n>\n\nWhat exactly is the use case to send empty transactions with or\nwithout prepared? 
In the past, there was a complaint [1] that such\ntransactions increase the network traffic.\n\n[1] - https://www.postgresql.org/message-id/CAMkU%3D1yohp9-dv48FLoSPrMqYEyyS5ZWkaZGD41RJr10xiNo_Q%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 20 Feb 2021 16:45:39 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Present all committed transaction to the output plugin" }, { "msg_contents": "On 20.02.21 12:15, Amit Kapila wrote:\n> What exactly is the use case to send empty transactions with or\n> without prepared?\n\nI'm not saying that output plugins should *send* empty transactions to \nthe replica. I rather agree that this indeed is not wanted in most cases.\n\nHowever, that's not what the patch changes. It just moves the decision \nto the output plugin, giving it more flexibility. And possibly allowing \nit to still take action. For example, in case of a distributed \ntwo-phase commit scenario, where the publisher waits after its local \nPREPARE for replicas to also PREPARE. If such a prepare doesn't even \nget to the output plugin, that won't work. Not even thinking of a \nPREPARE on one node followed by a COMMIT PREPARED from a different node. \n It simply is not the business of the decoder to decide what to do with \nempty transactions.\n\nPlus, given the decoder does not manage to reliably filter all empty \ntransactions, an output plugin might want to implement its own \nfiltering, anyway (case in point: contrib/test_decoding and its \n'skip-empty-xacts' option - that actually kind of implies it would be \npossible to not skip them - as does the documentation). 
So I'm rather \nwondering: what's the use case of filtering some, but not all empty \ntransactions (on the decoder side)?\n\nRegards\n\nMarkus\n\n\n", "msg_date": "Sat, 20 Feb 2021 13:48:49 +0100", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Present all committed transaction to the output plugin" }, { "msg_contents": "Hi,\n\nOn 2021-02-20 13:48:49 +0100, Markus Wanner wrote:\n> However, that's not what the patch changes. It just moves the decision to\n> the output plugin, giving it more flexibility. And possibly allowing it to\n> still take action.\n\nIt's not free though - there's plenty workloads where there's an xid but\nno other WAL records for transactions. Threading those through the\noutput plugin does increase the runtime cost. And because such\ntransactions will typically not incur a high cost on the primary\n(e.g. in case of unlogged tables, there'll be a commit record, but often\nthe transaction will not wait for the commit record to be flushed to\ndisk), increasing the replication overhead isn't great.\n\n\n> For example, in case of a distributed two-phase commit\n> scenario, where the publisher waits after its local PREPARE for replicas to\n> also PREPARE.\n\nWhy is that ever interesting to do in the case of empty transactions?\nDue to the cost of doing remote PREPAREs ISTM you'd always want to\nimplement the optimization of not doing so for empty transactions.\n\n\n> So I'm rather wondering: what's the use case of filtering some, but\n> not all empty transactions (on the decoder side)?\n\nI'm wondering the opposite: What's a potential use case for handing\n\"trivially empty\" transactions to the output plugin that's worth\nincurring some cost for everyone?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 20 Feb 2021 12:08:18 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Present all committed transaction to the output 
plugin" }, { "msg_contents": "On 20.02.21 21:08, Andres Freund wrote:\n> It's not free though\n\nAgreed. It's an additional call to a callback. Do you think that's \nacceptable if limited to two-phase transactions only?\n\n> I'm wondering the opposite: What's a potential use case for handing\n> \"trivially empty\" transactions to the output plugin that's worth\n> incurring some cost for everyone?\n\nOutlined in my previous mail: prepare the transaction on one node, \ncommit it on another one. The PREPARE of a transaction is an event a \nuser may well want to have replicated, without having to worry about \nwhether or not the transaction happens to be empty.\n\n[ Imagine: ERROR: transaction cannot be replicated because it's empty.\n HINT: add a dummy UPDATE so that Postgres always has\n something to replicate, whatever else your app does\n or does not do in the transaction. ]\n\nRegards\n\nMarkus\n\n\n", "msg_date": "Sat, 20 Feb 2021 21:44:30 +0100", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Present all committed transaction to the output plugin" }, { "msg_contents": "Hi,\n\nOn 2021-02-20 21:44:30 +0100, Markus Wanner wrote:\n> On 20.02.21 21:08, Andres Freund wrote:\n> > It's not free though\n> \n> Agreed. It's an additional call to a callback.\n\nIf it were just a single indirection function call I'd not be\nbothered. But we need to do a fair bit more than that\n(c.f. ReorderBufferProcessTXN()).\n\n\n> Do you think that's acceptable if limited to two-phase transactions\n> only?\n\nCost-wise, yes - a 2pc prepare/commit is expensive enough that\ncomparatively the replay cost is unlikely to be relevant. 
Behaviourally\nI'm still not convinced it's useful.\n\n\n> > I'm wondering the opposite: What's a potential use case for handing\n> > \"trivially empty\" transactions to the output plugin that's worth\n> > incurring some cost for everyone?\n> \n> Outlined in my previous mail: prepare the transaction on one node, commit it\n> on another one. The PREPARE of a transaction is an event a user may well\n> want to have replicated, without having to worry about whether or not the\n> transaction happens to be empty.\n\nI read the previous mails in this thread, and I don't really see an\nexplanation for why this is something actually useful. When is a\ntransaction without actual contents interesting to replicate? I don't\nfind the \"gid potentially carries information\" particularly convincing.\n\n\n> [ Imagine: ERROR: transaction cannot be replicated because it's empty.\n> HINT: add a dummy UPDATE so that Postgres always has\n> something to replicate, whatever else your app does\n> or does not do in the transaction. ]\n\nMeh.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 20 Feb 2021 18:04:16 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Present all committed transaction to the output plugin" }, { "msg_contents": "On 21.02.21 03:04, Andres Freund wrote:\n> Cost-wise, yes - a 2pc prepare/commit is expensive enough that\n> comparatively the replay cost is unlikely to be relevant.\n\nGood. I attached an updated patch eliminating only the filtering for \nempty two-phase transactions.\n\n> Behaviourally I'm still not convinced it's useful.\n\nI don't have any further argument than: If you're promising to replicate \ntwo phases, I expect the first phase to be replicated individually.\n\nA database state with a transaction prepared and identified by \n'woohoo-roll-me-back-if-you-can' is not the same as a state without it. \n Even if the transaction is empty, or if you're actually going to roll \nit back. 
And therefore possibly ending up at the very same state without \nany useful effect.\n\nRegards\n\nMarkus", "msg_date": "Sun, 21 Feb 2021 11:05:34 +0100", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Present all committed transaction to the output plugin" }, { "msg_contents": "\n\nOn 2/21/21 11:05 AM, Markus Wanner wrote:\n> On 21.02.21 03:04, Andres Freund wrote:\n>> Cost-wise, yes - a 2pc prepare/commit is expensive enough that\n>> comparatively the replay cost is unlikely to be relevant.\n> \n> Good.  I attached an updated patch eliminating only the filtering for \n> empty two-phase transactions.\n> \n>> Behaviourally I'm still not convinced it's useful.\n> \n> I don't have any further argument than: If you're promising to replicate \n> two phases, I expect the first phase to be replicated individually.\n> \n> A database state with a transaction prepared and identified by \n> 'woohoo-roll-me-back-if-you-can' is not the same as a state without it. \n>  Even if the transaction is empty, or if you're actually going to roll \n> it back. And therefore possibly ending up at the very same state without \n> any useful effect.\n> \n\nIMHO it's quite weird to handle the 2PC and non-2PC cases differently.\n\nIf the argument is that this is expensive, it'd be good to quantify \nthat, somehow. If there's a workload with significant fraction of such \nempty transactions, does that mean +1% CPU usage, +10% or more?\n\nWhy not to make this configurable, i.e. the output plugin might indicate \nwhether it's interested in empty transactions or not. If not, we can do \nwhat we do now. 
Otherwise the empty transactions would be passed to the \noutput plugin.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 21 Feb 2021 22:56:03 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Present all committed transaction to the output plugin" } ]
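An aside on the behaviour debated in the thread above: the decoder-side handling of empty transactions can be observed from SQL with contrib/test_decoding. A minimal session sketch (slot and table names are illustrative; it assumes wal_level = logical and a user allowed to create replication slots — note the option is spelled 'skip-empty-xacts' in the shipped plugin):

```sql
-- Create a logical replication slot driven by the test_decoding plugin.
SELECT pg_create_logical_replication_slot('decoding_demo', 'test_decoding');

CREATE TABLE decoded_tbl (id int PRIMARY KEY);

-- A committed transaction with no decodable changes:
BEGIN;
SELECT txid_current();   -- forces an xid assignment but writes no rows
COMMIT;

-- A committed transaction with a real change:
INSERT INTO decoded_tbl VALUES (1);

-- Drain the change stream.  With skip-empty-xacts enabled, the plugin
-- suppresses BEGIN/COMMIT pairs that carry no changes, so only the
-- INSERT transaction is reported.
SELECT data
FROM pg_logical_slot_get_changes('decoding_demo', NULL, NULL,
                                 'skip-empty-xacts', '1');

SELECT pg_drop_replication_slot('decoding_demo');
```

Even with the option set to '0', a committed empty transaction only shows up if the decoder handed it to the output plugin at all, which is exactly the inconsistency the patch in this thread removes.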
[ { "msg_contents": "Suppose we have a partitioned table defined as below:\n\nP(a, b, c, d) partition by (a)\n p1 (a=1)\n p2 (a=2)\n p3 (a=3)\n\nSince in PG we can define different indexes on different partitions, each\nchild may have different UniqueKeys, and some of them might be invalidated\nat the parent's level. For example: 1). We define a unique index only on\np1(c), so (c) would be a uniquekey of p1 only, and it is invalid at the\nappendrel level. 2). We define a unique index p_n(c) on each childrel, so\nevery childrel has UniqueKey (c); however, it is invalid on the appendrel\nas well. 3). We define a unique index p_n(a, c); since a is the partition\nkey, (a, c) would be valid for both the child rels and the parent rel.\n\n\nIn my earlier v1 implementation[1], I maintained the child rels exactly the\nsame as for a non-partitioned table. But when calculating the UniqueKey for\na partitioned table, I first introduced a global_unique_indexlist which\nhandles the above 3 cases. The indexes for case 1 and case 2 will not be in\nglobal_unique_indexlist, but the index in case 3 will be, even if it is\nonly built at the child level. After we have built the\nglobal_unique_indexlist on the appendrel, we build the UniqueKey exactly\nthe same as for a non-partitioned table. I'm not happy with that method\nany more because 1). the global_unique_indexlist is built in a hard way.\n2). I have to totally ignore the UniqueKey at the child level and\nre-compute it at the appendrel level. 3). The 3 cases should rarely happen\nin real life, I guess.\n\nWhen I re-implemented the UniqueKey with EquivalenceClass, I re-thought how\nto handle the above stuff. Now my preferred idea is just to not handle it.\nWhen building the uniquekey on the parent rel, we just handle 2 cases. 1).\nIf the appendrel only has 1 child, we just copy (and modify if needed, due\nto the col-order-mismatch case) the uniquekey. 2). 
Only handle the unique indexes defined at the top level; for this case\nit would yield the below situation.\n\ncreate unique index on p(a, b); --> (A, B) will be the UniqueKey of p.\ncreate unique index on p_nn(a, b); --> (A, B) will not be the UniqueKey of p\neven if we create it on ALL the child rels. The result is not perfect, but I\nthink it is practical. Any suggestions?\n\nThe attached is a UniqueKey with EquivalenceClass patch; I have just\ncompleted the single relation part and it may have bugs. I attached it here\nfor design review only, and the not-null-attrs patch is just v1, which we\ncan continue discussing on the original thread[2].\n\n[1]\nhttps://www.postgresql.org/message-id/CAKU4AWr1BmbQB4F7j22G%2BNS4dNuem6dKaUf%2B1BK8me61uBgqqg%40mail.gmail.com\n[2]\nhttps://www.postgresql.org/message-id/flat/CAKU4AWpQjAqJwQ2X-aR9g3+ZHRzU1k8hNP7A+_mLuOv-n5aVKA@mail.gmail.com\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Sat, 20 Feb 2021 10:25:59 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "UniqueKey on Partitioned table." }, { "msg_contents": "> On Sat, Feb 20, 2021 at 10:25:59AM +0800, Andy Fan wrote:\n>\n> The attached is a UniqueKey with EquivalenceClass patch, I just complete the\n> single relation part and may have bugs. I just attached it here for design\n> review only. and the not-null-attrs is just v1 which we can continue\n> discussing on the original thread[2].\n\nThanks for the patch. After a short look through it I'm a bit confused\nand wanted to clarify, now uniquekeys list could contain both Expr and\nEquivalenceClass?\n\n\n", "msg_date": "Fri, 26 Mar 2021 20:10:36 +0100", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: UniqueKey on Partitioned table." 
}, { "msg_contents": "On Sat, Mar 27, 2021 at 3:07 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> > On Sat, Feb 20, 2021 at 10:25:59AM +0800, Andy Fan wrote:\n> >\n> > The attached is a UnqiueKey with EquivalenceClass patch, I just complete the\n> > single relation part and may have bugs. I just attached it here for design\n> > review only. and the not-null-attrs is just v1 which we can continue\n> > discussing on the original thread[2].\n>\n> Thanks for the patch. After a short look through it I'm a bit confused\n> and wanted to clarify, now uniquekeys list could contain both Expr and\n> EquivalenceClass?\n\nYes, That's because I don't want to create a new EquivalenceClass (which\nwould make the PlannerInfo->eq_classes longer) if we don't have\none , then I just used one Expr instead for this case. However during the\ntest, I found some EquivalenceClass with only 1 EquivalenceMember\nunexpectedly.\n\n
-- Best RegardsAndy Fan (https://www.aliyun.com/)", "msg_date": "Sat, 27 Mar 2021 14:14:25 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: UniqueKey on Partitioned table." }, { "msg_contents": "On Sat, Mar 27, 2021 at 11:44 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>\n>\n> On Sat, Mar 27, 2021 at 3:07 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>>\n>> > On Sat, Feb 20, 2021 at 10:25:59AM +0800, Andy Fan wrote:\n>> >\n>> > The attached is a UnqiueKey with EquivalenceClass patch, I just complete the\n>> > single relation part and may have bugs. I just attached it here for design\n>> > review only. and the not-null-attrs is just v1 which we can continue\n>> > discussing on the original thread[2].\n>>\n>> Thanks for the patch. After a short look through it I'm a bit confused\n>> and wanted to clarify, now uniquekeys list could contain both Expr and\n>> EquivalenceClass?\n>\n>\n> Yes, That's because I don't want to create a new EquivalenceClass (which\n> would make the PlannerInfo->eq_classes longer) if we don't have\n> one , then I just used one Expr instead for this case.\n> However during the\n> test, I found some EquivalenceClass with only 1 EquivalenceMember\n> unexpectedly.\n>\n\nPathkeys may induce single member ECs. Why UniqueKeys are an exception?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 29 Mar 2021 18:56:59 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: UniqueKey on Partitioned table." }, { "msg_contents": "On Tue, 30 Mar 2021 at 02:27, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Sat, Mar 27, 2021 at 11:44 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >\n> > On Sat, Mar 27, 2021 at 3:07 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> >> Thanks for the patch. 
After a short look through it I'm a bit confused\n> >> and wanted to clarify, now uniquekeys list could contain both Expr and\n> >> EquivalenceClass?\n> >\n> >\n> > Yes, That's because I don't want to create a new EquivalenceClass (which\n> > would make the PlannerInfo->eq_classes longer) if we don't have\n> > one , then I just used one Expr instead for this case.\n> > However during the\n> > test, I found some EquivalenceClass with only 1 EquivalenceMember\n> > unexpectedly.\n> >\n>\n> Pathkeys may induce single member ECs. Why UniqueKeys are an exception?\n\nI doubt that it should be. get_eclass_for_sort_expr() makes\nsingle-member ECs for sorts. I imagine the UniqueKey stuff should\ncopy that... However, get_eclass_for_sort_expr() can often dominate\nthe planning effort in queries to partitioned tables with a large\nnumber of partitions when the query has an ORDER BY. Perhaps Andy is\ntrying to sidestep that issue?\n\nI mentioned a few things in [1] on what I think about this.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvoDMyw=hTuW-258yqNK4bhW6CpguJU_GZBh4x+rnoem3w@mail.gmail.com\n\n\n", "msg_date": "Tue, 30 Mar 2021 09:16:42 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: UniqueKey on Partitioned table." }, { "msg_contents": "On Tue, Mar 30, 2021 at 4:16 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Tue, 30 Mar 2021 at 02:27, Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > On Sat, Mar 27, 2021 at 11:44 AM Andy Fan <zhihui.fan1213@gmail.com>\n> wrote:\n> > >\n> > > On Sat, Mar 27, 2021 at 3:07 AM Dmitry Dolgov <9erthalion6@gmail.com>\n> wrote:\n> > >> Thanks for the patch. 
After a short look through it I'm a bit confused\n> > >> and wanted to clarify, now uniquekeys list could contain both Expr and\n> > >> EquivalenceClass?\n> > >\n> > >\n> > > Yes, That's because I don't want to create a new EquivalenceClass\n> (which\n> > > would make the PlannerInfo->eq_classes longer) if we don't have\n> > > one , then I just used one Expr instead for this case.\n> > > However during the\n> > > test, I found some EquivalenceClass with only 1 EquivalenceMember\n> > > unexpectedly.\n> > >\n> >\n> > Pathkeys may induce single member ECs. Why UniqueKeys are an exception?\n>\n>\nWhen working with UniqueKey, I do want to make PlannerInfo.eq_classes short,\nso I don't want to create a new EC for UniqueKey only. After I realized we\nhave\nso single-member ECs, I doubt if the \"Expr in UniqueKey\" will be executed\nin real.\nI still didn't get enough time to do more research about this.\n\nI doubt that it should be. get_eclass_for_sort_expr() makes\n> single-member ECs for sorts.\n\n\nThanks for this hint. I can check more cases like this.\n\n\n> I imagine the UniqueKey stuff should\n> copy that... However, get_eclass_for_sort_expr() can often dominate\n\nthe planning effort in queries to partitioned tables with a large\n> number of partitions when the query has an ORDER BY. Perhaps Andy is\n> trying to sidestep that issue?\n>\n\nYes. a long PlannerInfo.eq_classes may make some finding slow, and in\nmy UniqueKey patch, I am trying to not make it longer.\n\n\n> I mentioned a few things in [1] on what I think about this.\n>\n> David\n>\n> [1]\n> https://www.postgresql.org/message-id/CAApHDvoDMyw=hTuW-258yqNK4bhW6CpguJU_GZBh4x+rnoem3w@mail.gmail.com\n>\n\nI appreciate all of the people who helped on this patch and others. I would\nlike to share more of my planning. As for the UniqueKey patch, there are\nsome\ndesign decisions that need to be made. In my mind, the order would be:\na). How to present the notnullattrs probably in [1] b). 
How to present\nthe element\nin UniqueKey. Pure EquivalenceClasses or Mix of Expr and EquivalenceClass\nas\nwe just talked about. c). How to maintain the UniqueKey Partitioned table\nin the\nbeginning of this thread. As for a) & c). I have my current proposal for\ndiscussion.\nAs for b) I think I need more thinking about this. Based on the idea\nabove, I am\nnot willing to move too fast on the following issue unless the previous\nissue\ncan be addressed. Any feedback/suggestion about my current planning is\nwelcome.\n\n[1]\nhttps://www.postgresql.org/message-id/CAKU4AWp3WKyrMKNdg46BvQRD7xkNL9UsLLcLhd5ao%3DFSbnaN_Q%40mail.gmail.com\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Tue, 30 Mar 2021 08:51:44 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: UniqueKey on Partitioned table." }, { "msg_contents": "> b).  How to present the element\n> in UniqueKey. 
Pure EquivalenceClasses or Mix of Expr and EquivalenceClass as\n> we just talked about.\nI think the reason we add ECs for sort expressions is to use\nthe transitive relationship. The EC may start with a single member but\nlater in the planning that member might find partners which are all\nequivalent. A result ordered by one is also ordered by the other. The\nsame logic applies to UniqueKey as well, doesn't it? In a result, if a\nset of columns makes a row unique, the set of columns represented by\nthe other EC members should be unique too. Though a key will start as a\nsingleton, it might find EC partners later, and thus the unique key will\ntransition to all the members. With that logic, UniqueKey should use\njust ECs instead of bare expressions.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 31 Mar 2021 18:42:09 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: UniqueKey on Partitioned table." }, { "msg_contents": "On Wed, Mar 31, 2021 at 9:12 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> > b).  How to present the element\n> > in UniqueKey.  Pure EquivalenceClasses or Mix of Expr and\n> EquivalenceClass as\n> > we just talked about.\n> I think the reason we add ECs for sort expressions is to use\n> the transitive relationship. The EC may start with a single member but\n> later in the planning that member might find partners which are all\n> equivalent. A result ordered by one is also ordered by the other. The\n> same logic applies to UniqueKey as well, doesn't it? In a result, if a\n> set of columns makes a row unique, the set of columns represented by\n> the other EC members should be unique too. Though a key will start as a\n> singleton, it might find EC partners later, and thus the unique key will\n> transition to all the members. With that logic, UniqueKey should use\n> just ECs instead of bare expressions.\n>\n\nTBH, I haven't thought about this too hard, but I think when we build the\nUniqueKey, all the ECs have been built already. So can you think of a\ncase where we start with an EC with a single member at the beginning and\nget more members later for UniqueKey cases?\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Tue, 6 Apr 2021 18:31:02 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: UniqueKey on Partitioned table." 
}, { "msg_contents": "On Tue, 6 Apr 2021 at 22:31, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> On Wed, Mar 31, 2021 at 9:12 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:\n>> I think the reason we add ECs for sort expressions is to use\n>> transitive relationship. The EC may start with a single member but\n>> later in the planning that member might find partners which are all\n>> equivalent. Result ordered by one is also ordered by the other. The\n>> same logic applies to UniqueKey as well, isn't it. In a result if a\n>> set of columns makes a row unique, the set of columns represented by\n>> the other EC member should be unique. Though a key will start as a\n>> singleton it might EC partners later and thus thus unique key will\n>> transition to all the members. With that logic UniqueKey should use\n>> just ECs instead of bare expressions.\n>\n>\n> TBH, I haven't thought about this too hard, but I think when we build the\n> UniqueKey, all the ECs have been built already. So can you think out an\n> case we start with an EC with a single member at the beginning and\n> have more members later for UniqueKey cases?\n\nI don't really know if it matters which order things happen. 
We still\nend up with a single EC containing {a,b} whether we process ORDER BY b\nor WHERE a=b first.\n\nIn any case, the reason we want PathKeys to be ECs rather than just\nExprs is to allow cases such as the following to use an index to\nperform the sort.\n\n# create table ab (a int, b int);\n# create index on ab(a);\n# set enable_seqscan=0;\n# explain select * from ab where a=b order by b;\n QUERY PLAN\n---------------------------------------------------------------------\n Index Scan using ab_a_idx on ab (cost=0.15..83.70 rows=11 width=8)\n Filter: (a = b)\n(2 rows)\n\nOf course, we couldn't use this index to provide pre-sorted results if\n\"where a=b\" hadn't been there.\n\nDavid\n\n\n", "msg_date": "Tue, 6 Apr 2021 22:55:36 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: UniqueKey on Partitioned table." }, { "msg_contents": "On Tue, Apr 6, 2021 at 6:55 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Tue, 6 Apr 2021 at 22:31, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > On Wed, Mar 31, 2021 at 9:12 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:\n> >> I think the reason we add ECs for sort expressions is to use\n> >> transitive relationship. The EC may start with a single member but\n> >> later in the planning that member might find partners which are all\n> >> equivalent. Result ordered by one is also ordered by the other. The\n> >> same logic applies to UniqueKey as well, isn't it. In a result if a\n> >> set of columns makes a row unique, the set of columns represented by\n> >> the other EC member should be unique. Though a key will start as a\n> >> singleton it might EC partners later and thus thus unique key will\n> >> transition to all the members. With that logic UniqueKey should use\n> >> just ECs instead of bare expressions.\n> >\n> >\n> > TBH, I haven't thought about this too hard, but I think when we build the\n> > UniqueKey, all the ECs have been built already. 
So can you think out an\n> > case we start with an EC with a single member at the beginning and\n> > have more members later for UniqueKey cases?\n>\n> I don't really know if it matters which order things happen. We still\n> end up with a single EC containing {a,b} whether we process ORDER BY b\n> or WHERE a=b first.\n>\n\nI think it is time to talk about this again. Take the below query as example:\n\nSELECT * FROM t1, t2 WHERE t1.pk = t2.pk;\n\nThen when I populate_baserel_uniquekeys for t1, we already have\nEC{Members={t1.pk, t2.pk}} in root->eq_classes already. Then I use\nthis EC directly for t1's UniqueKey. The result is:\n\nT1's UniqueKey : [ EC{Members={t1.pk, t2.pk}} ].\n\n*Would this be OK since at the baserel level, the \"t1.pk = t2.pk\" is not\nexecuted yet?*\n\nI tried the below example to test how PathKey is maintained.\nCREATE TABLE t1 (a INT, b INT);\nCREATE TABLE t2 (a INT, b INT);\nCREATE INDEX ON t1(b);\n\nSELECT * FROM t1, t2 WHERE t1.b = t2.b and t1.b > 3;\n\nthen we can get t1's Path:\n\nIndex Scan on (b), PathKey.pk_class include 2 members (t1.b, t2.b}\neven before the Join.\n\nSo looks the answer for my question should be \"yes\"? Hope I have\nmade myself clear.\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)\n\n\n", "msg_date": "Sat, 17 Jul 2021 15:32:18 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: UniqueKey on Partitioned table." }, { "msg_contents": "On Sat, 17 Jul 2021 at 19:32, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> SELECT * FROM t1, t2 WHERE t1.pk = t2.pk;\n>\n> Then when I populate_baserel_uniquekeys for t1, we already have\n> EC{Members={t1.pk, t2.pk}} in root->eq_classes already. Then I use\n> this EC directly for t1's UniqueKey. 
The result is:\n>\n> T1's UniqueKey : [ EC{Members={t1.pk, t2.pk}} ].\n>\n> *Would this be OK since at the baserel level, the \"t1.pk = t2.pk\" is not\n> executed yet?*\n>\n> I tried the below example to test how PathKey is maintained.\n> CREATE TABLE t1 (a INT, b INT);\n> CREATE TABLE t2 (a INT, b INT);\n> CREATE INDEX ON t1(b);\n>\n> SELECT * FROM t1, t2 WHERE t1.b = t2.b and t1.b > 3;\n>\n> then we can get t1's Path:\n>\n> Index Scan on (b), PathKey.pk_class include 2 members (t1.b, t2.b}\n> even before the Join.\n>\n> So looks the answer for my question should be \"yes\"? Hope I have\n> made myself clear.\n\nI don't see the problem. The reason PathKeys use EquivalenceClasses is\nso that queries like: SELECT * FROM tab WHERE a=b ORDER BY b; can see\nthat they're also ordered by a. This is useful because if there\nhappens to be an index on tab(a) then we can use it to provide the\nrequired ordering for this query.\n\nWe'll want the same with UniqueKeys. The same thing there looks like:\n\nCREATE TABLE tab (a int primary key, b int not null);\n\nselect distinct b from tab where a=b;\n\nSince we have the EquivalenceClass with {a,b} stored in the UniqueKey,\nthen we should be able to execute this without doing any distinct\noperation.\n\nDavid\n\n\n", "msg_date": "Sat, 17 Jul 2021 19:45:05 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: UniqueKey on Partitioned table." }, { "msg_contents": "On Sat, Jul 17, 2021 at 3:45 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sat, 17 Jul 2021 at 19:32, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > SELECT * FROM t1, t2 WHERE t1.pk = t2.pk;\n> >\n> > Then when I populate_baserel_uniquekeys for t1, we already have\n> > EC{Members={t1.pk, t2.pk}} in root->eq_classes already. Then I use\n> > this EC directly for t1's UniqueKey. 
The result is:\n> >\n> > T1's UniqueKey : [ EC{Members={t1.pk, t2.pk}} ].\n> >\n> > *Would this be OK since at the baserel level, the \"t1.pk = t2.pk\" is not\n> > executed yet?*\n> >\n> > I tried the below example to test how PathKey is maintained.\n> > CREATE TABLE t1 (a INT, b INT);\n> > CREATE TABLE t2 (a INT, b INT);\n> > CREATE INDEX ON t1(b);\n> >\n> > SELECT * FROM t1, t2 WHERE t1.b = t2.b and t1.b > 3;\n> >\n> > then we can get t1's Path:\n> >\n> > Index Scan on (b), PathKey.pk_class include 2 members (t1.b, t2.b}\n> > even before the Join.\n> >\n> > So looks the answer for my question should be \"yes\"? Hope I have\n> > made myself clear.\n>\n> I don't see the problem.\n\nThanks for the double check, that removes a big blocker for my development.\nI'd submit a new patch very soon.\n\n> The reason PathKeys use EquivalenceClasses is\n> so that queries like: SELECT * FROM tab WHERE a=b ORDER BY b; can see\n> that they're also ordered by a. This is useful because if there\n> happens to be an index on tab(a) then we can use it to provide the\n> required ordering for this query.\n>\n> We'll want the same with UniqueKeys. The same thing there looks like:\n>\n> CREATE TABLE tab (a int primary key, b int not null);\n>\n> select distinct b from tab where a=b;\n>\n> Since we have the EquivalenceClass with {a,b} stored in the UniqueKey,\n> then we should be able to execute this without doing any distinct\n> operation.\n>\n> David\n\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)\n\n\n", "msg_date": "Sun, 18 Jul 2021 03:38:58 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: UniqueKey on Partitioned table." } ]
[ { "msg_contents": "Hello,\n\nAndrew Gierth pointed out that I left behind some outdated advice\nabout RAID spindles in the GUC's extra description field, in commit\nb09ff536. Let's just drop that description. Patch attached.", "msg_date": "Sat, 20 Feb 2021 21:28:39 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Outdated description for effective_io_concurrency" }, { "msg_contents": "On Sat, Feb 20, 2021 at 09:28:39PM +1300, Thomas Munro wrote:\n> Hello,\n> \n> Andrew Gierth pointed out that I left behind some outdated advice\n> about RAID spindles in the GUC's extra description field, in commit\n> b09ff536. Let's just drop that description. Patch attached.\n\n+1.\n\n\n", "msg_date": "Sat, 20 Feb 2021 17:07:27 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Outdated description for effective_io_concurrency" } ]
[ { "msg_contents": "Hi,\n\nCan someone please tell me which of these two queries gives the correct result and which one the incorrect?\n\n/* * * *\n * dT in days for 1000 samples\n */\n\n// 2.922 (&)\nwith A1 as ( select make_interval (0, 0, 0, 0, 0, 0, ( extract ( epoch from interval '8 years' ) / 1000 ) ) as \"00\" ) select ( extract ( hours from \"00\" ) +\nextract ( minutes from \"00\" ) / 60 + extract ( seconds from \"00\" ) / 3600 ) / 24 as dT from A1;\n\n// 2.88 (X)\nwith A1 as ( select interval '8 years' / 1000 as \"00\" ) select extract ( days from \"00\" ) + extract ( hours from \"00\" ) / 24 + extract ( minutes from \"00\" ) /\n1440 + extract ( seconds from \"00\" ) / 86400 as dT from A1;\n\nPersonally I think only the first one gives the correct answer.\n\nBest regards,\nMischa.\n\n\n\n", "msg_date": "Sat, 20 Feb 2021 10:27:20 +0100", "msg_from": "\"Michael J. Baars\" <mjbaars1977.pgsql-hackers@cyberfiber.eu>", "msg_from_op": true, "msg_subject": "computing dT from an interval" }, { "msg_contents": "\"Michael J. Baars\" <mjbaars1977.pgsql-hackers@cyberfiber.eu> writes:\n> Can someone please tell me which of these two queries gives the correct result and which one the incorrect?\n\n> // 2.922 (&)\n> with A1 as ( select make_interval (0, 0, 0, 0, 0, 0, ( extract ( epoch from interval '8 years' ) / 1000 ) ) as \"00\" ) select ( extract ( hours from \"00\" ) +\n> extract ( minutes from \"00\" ) / 60 + extract ( seconds from \"00\" ) / 3600 ) / 24 as dT from A1;\n\n> // 2.88 (X)\n> with A1 as ( select interval '8 years' / 1000 as \"00\" ) select extract ( days from \"00\" ) + extract ( hours from \"00\" ) / 24 + extract ( minutes from \"00\" ) /\n> 1440 + extract ( seconds from \"00\" ) / 86400 as dT from A1;\n\nThey'e both \"incorrect\", for some value of \"incorrect\". 
Quantities like\nyears, days, and seconds don't interconvert freely, which is why the\ninterval datatype tries to keep them separate.\n\nIn the first case, the main approximation is introduced when you do\n\nselect extract ( epoch from interval '8 years' );\n date_part \n-----------\n 252460800\n(1 row)\n\nIf you do the math, you'll soon see that that corresponds to assuming\n365.25 days (of 86400 seconds each) per year. So that's already wrong;\nno year contains fractional days.\n\nIn the second case, the trouble starts with \n\nselect interval '8 years' / 1000;\n ?column? \n-----------------\n 2 days 21:07:12\n(1 row)\n\nInternally, '8 years' is really 96 months, but to divide by 1000 we\nhave to down-convert that into the lesser units of days and seconds.\nThe approximation that's used for that is that months have 30 days,\nso we initially get 2.88 days, and then the 0.88 days part is\nconverted to 76032 seconds.\n\nSo yeah, you can poke a lot of holes in these choices, but different\nchoices would just be differently inconsistent. The Gregorian calendar\nis not very rational.\n\nPersonally I stay away from applying interval multiplication/division\nto anything except intervals expressed in seconds. As soon as you\nget into the larger units, you're forced to make unsupportable\nassumptions.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 20 Feb 2021 11:20:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: computing dT from an interval" }, { "msg_contents": "On Sat, 2021-02-20 at 11:20 -0500, Tom Lane wrote:\n> \"Michael J. 
Baars\" <mjbaars1977.pgsql-hackers@cyberfiber.eu> writes:\n> > Can someone please tell me which of these two queries gives the correct result and which one the incorrect?\n> > // 2.922 (&)\n> > with A1 as ( select make_interval (0, 0, 0, 0, 0, 0, ( extract ( epoch from interval '8 years' ) / 1000 ) ) as \"00\" ) select ( extract ( hours from \"00\" ) +\n> > extract ( minutes from \"00\" ) / 60 + extract ( seconds from \"00\" ) / 3600 ) / 24 as dT from A1;\n> > // 2.88 (X)\n> > with A1 as ( select interval '8 years' / 1000 as \"00\" ) select extract ( days from \"00\" ) + extract ( hours from \"00\" ) / 24 + extract ( minutes from \"00\" )\n> > /\n> > 1440 + extract ( seconds from \"00\" ) / 86400 as dT from A1;\n> \n> They'e both \"incorrect\", for some value of \"incorrect\". Quantities like\n> years, days, and seconds don't interconvert freely, which is why the\n> interval datatype tries to keep them separate.\n> \n> In the first case, the main approximation is introduced when you do\n> \n> select extract ( epoch from interval '8 years' );\n> date_part \n> -----------\n> 252460800\n> (1 row)\n> \n> If you do the math, you'll soon see that that corresponds to assuming\n> 365.25 days (of 86400 seconds each) per year. So that's already wrong;\n> no year contains fractional days.\n\nI don't see the problem in this, we have 6 years of 365 days and 2 years of 366 days. Using this dt, we can compute a set of equidistant time stamps with their\ncorresponding values. Only the first and last row need to be on certain specific points in time, n * dt for certain n does not need to end up on january 1st for\neach year in the interval.\n\nActually I only need to know for the moment, if dt spans more or less than one week and more or less than one day :)\n\n> \n> In the second case, the trouble starts with \n> \n> select interval '8 years' / 1000;\n> ?column? 
\n> -----------------\n> 2 days 21:07:12\n> (1 row)\n> \n> Internally, '8 years' is really 96 months, but to divide by 1000 we\n> have to down-convert that into the lesser units of days and seconds.\n> The approximation that's used for that is that months have 30 days,\n> so we initially get 2.88 days, and then the 0.88 days part is\n> converted to 76032 seconds.\n> \n> So yeah, you can poke a lot of holes in these choices, but different\n> choices would just be differently inconsistent. The Gregorian calendar\n> is not very rational.\n> \n> Personally I stay away from applying interval multiplication/division\n> to anything except intervals expressed in seconds. As soon as you\n> get into the larger units, you're forced to make unsupportable\n> assumptions.\n\nSo how do you compute the number of seconds in 8 years?\n\n> \n\nI really think the first one does give the correct answer. The only thing is that the second one, the most trivial one of the two, does not give the same answer\nas the first. They should have returned exactly the same number if you ask me.\n\n\n> \t\t\tregards, tom lane\n> \n\nRegards,\nMischa Baars.\n\n\n\n", "msg_date": "Mon, 22 Feb 2021 10:30:39 +0100", "msg_from": "\"Michael J. Baars\" <mjbaars1977.pgsql-hackers@cyberfiber.eu>", "msg_from_op": true, "msg_subject": "Re: computing dT from an interval" }, { "msg_contents": "\"Michael J. Baars\" <mjbaars1977.pgsql-hackers@cyberfiber.eu> writes:\n> So how do you compute the number of seconds in 8 years?\n\nIMO, that's a meaningless computation, because the answer is not fixed.\nBefore you claim otherwise, think about the every-four-hundred-years\nleap year exception in the Gregorian rules. Besides, what if the\nquestion is \"how many seconds in 7 years\"? 
Then it definitely varies\ndepending on the number of leap days included.\n\nWhat does make sense is timestamp subtraction, where the actual\nendpoints of the interval are known.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Feb 2021 10:52:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: computing dT from an interval" }, { "msg_contents": "On Mon, Feb 22, 2021 at 10:52:42AM -0500, Tom Lane wrote:\n> \"Michael J. Baars\" <mjbaars1977.pgsql-hackers@cyberfiber.eu> writes:\n> > So how do you compute the number of seconds in 8 years?\n> \n> IMO, that's a meaningless computation, because the answer is not fixed.\n> Before you claim otherwise, think about the every-four-hundred-years\n> leap year exception in the Gregorian rules. Besides, what if the\n> question is \"how many seconds in 7 years\"? Then it definitely varies\n> depending on the number of leap days included.\n> \n> What does make sense is timestamp subtraction, where the actual\n> endpoints of the interval are known.\n\nTrue.\n\nI'm not sure whether this is a bug or an infelicity we document, but\nat least in some parts of the world, this calculation doesn't comport\nwith the calendar in place at the time:\n\nSELECT to_timestamp('1753', 'YYYY') - to_timestamp('1752', 'YYYY');\n ?column? 
\n══════════\n 366 days\n(1 row)\n\nI'd like to imagine nobody will ever go mucking with the calendar to\nthe extent the British did that year, but one never knows.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Mon, 22 Feb 2021 17:11:06 +0100", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: computing dT from an interval" }, { "msg_contents": "David Fetter <david@fetter.org> writes:\n> I'm not sure whether this is a bug or an infelicity we document, but\n> at least in some parts of the world, this calculation doesn't comport\n> with the calendar in place at the time:\n> SELECT to_timestamp('1753', 'YYYY') - to_timestamp('1752', 'YYYY');\n\nYeah, Appendix B.6 mentions that.\n\nWhat isn't documented, and maybe should be, is the weird results\nyou get from the tzdata info for years before standardized time\nzones came into use.\n\nregression=# show timezone;\n TimeZone \n------------------\n America/New_York\n(1 row)\n\nregression=# select '2020-01-01 00:00'::timestamptz;\n timestamptz \n------------------------\n 2020-01-01 00:00:00-05\n(1 row)\n\nregression=# select '1800-01-01 00:00'::timestamptz;\n timestamptz \n------------------------------\n 1800-01-01 00:00:00-04:56:02\n(1 row)\n\nIf you're wondering where the heck that came from, it corresponds\nto the actual longitude of New York City, i.e. local mean solar time.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Feb 2021 11:29:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: computing dT from an interval" }, { "msg_contents": "On Mon, 2021-02-22 at 10:52 -0500, Tom Lane wrote:\n> \"Michael J. 
Baars\" <mjbaars1977.pgsql-hackers@cyberfiber.eu> writes:\n> > So how do you compute the number of seconds in 8 years?\n> \n> IMO, that's a meaningless computation, because the answer is not fixed.\n> Before you claim otherwise, think about the every-four-hundred-years\n> leap year exception in the Gregorian rules. Besides, what if the\n> question is \"how many seconds in 7 years\"? Then it definitely varies\n> depending on the number of leap days included.\n> \n> What does make sense is timestamp subtraction, where the actual\n> endpoints of the interval are known.\n> \n> \t\t\tregards, tom lane\n> \n> \nThere you have a point. Strange then that you get an answer other than 'undefined' when subtracting x - y, where y is undefined until x is defined, but you are\ncompletely right. An interval of 8 years doesn't count a fixed number of seconds.\n\nThanks,\nMischa.\n\n\n\n", "msg_date": "Tue, 23 Feb 2021 09:23:21 +0100", "msg_from": "\"Michael J. Baars\" <mjbaars1977.pgsql-hackers@cyberfiber.eu>", "msg_from_op": true, "msg_subject": "Re: computing dT from an interval" } ]
[ { "msg_contents": "Our documentation says specifically \"A character class cannot be used\nas an endpoint of a range.\" This should apply to the character class\nshorthand escapes (\\d and so on) too, and for the most part it does:\n\n# select 'x' ~ '[\\d-a]';\nERROR: invalid regular expression: invalid character range\n\nHowever, certain combinations involving \\w don't throw any error:\n\n# select 'x' ~ '[\\w-a]';\n ?column? \n----------\n t\n(1 row)\n\nwhile others do:\n\n# select 'x' ~ '[\\w-;]';\nERROR: invalid regular expression: invalid character range\n\nIt turns out that what's happening here is that \\w is being\nmacro-expanded into \"[:alnum:]_\" (see the brbackw[] constant\nin regc_lex.c), so then we have\n\nselect 'x' ~ '[[:alnum:]_-a]';\n\nand that's valid as long as '_' is less than the trailing\nrange bound. The fact that we're using REG_ERANGE for both\n\"range syntax botch\" and \"range start is greater than range\nend\" helps to mask the fact that the wrong thing is happening,\ni.e. my last example above is giving the right error string\nfor the wrong reason.\n\nI thought of changing the expansion to \"_[:alnum:]\" but of\ncourse that just moves the problem around: then some cases\nwith \\w after a dash would be accepted when they shouldn't be.\n\nI have a patch in progress that gets rid of the hokey macro\nexpansion implementation of \\w and friends, and I noticed\nthis issue because it started to reject \"[\\w-_]\", which our\nexisting code accepts. There's a bunch of examples like that\nin Joel's Javascript regex corpus. I suspect that Javascript\nis reading such cases as \"\\w plus the literal characters '-'\nand '_'\", but I'm not 100% sure of that.\n\nAnyway, I don't see any non-invasive way to fix this in the\nback branches, and I'm not sure that anyone would appreciate\nour changing it in stable branches anyway. 
But I wanted to\ndocument the issue for the record.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 20 Feb 2021 17:20:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Bizarre behavior of \\w in a regular expression bracket construct" }, { "msg_contents": "On Sat, Feb 20, 2021, at 23:20, Tom Lane wrote:\n>I have a patch in progress that gets rid of the hokey macro\n>expansion implementation of \\w and friends, and I noticed\n>this issue because it started to reject \"[\\w-_]\", which our\n>existing code accepts. There's a bunch of examples like that\n>in Joel's Javascript regex corpus. I suspect that Javascript\n>is reading such cases as \"\\w plus the literal characters '-'\n>and '_'\", but I'm not 100% sure of that.\n\nIn an attempt trying to demystify how \\w works in various regex engines,\nI created a test to deduce the matching ranges for a given bracket expression.\n\nIn the ASCII mode, it just tries all characters between 1...255:\n\n regex | engine | deduced_ranges\n------------+--------+-------------------------------\n^([a-z])$ | pg | [a-z]\n^([a-z])$ | pl | [a-z]\n^([a-z])$ | v8 | [a-z]\n^([\\d-a])$ | pg |\n^([\\d-a])$ | pl | [-0-9a]\n^([\\d-a])$ | v8 | [-0-9a]\n^([\\w-;])$ | pg |\n^([\\w-;])$ | pl | [-0-9;A-Z_a-zªµºÀ-ÖØ-öø-ÿ]\n^([\\w-;])$ | v8 | [-0-9;A-Z_a-z]\n^([\\w-_])$ | pg | [0-9A-Z_a-zªµºÀ-ÖØ-öø-ÿ]\n^([\\w-_])$ | pl | [-0-9A-Z_a-zªµºÀ-ÖØ-öø-ÿ]\n^([\\w-_])$ | v8 | [-0-9A-Z_a-z]\n^([\\w])$ | pg | [0-9A-Z_a-zªµºÀ-ÖØ-öø-ÿ]\n^([\\w])$ | pl | [0-9A-Z_a-zªµºÀ-ÖØ-öø-ÿ]\n^([\\w])$ | v8 | [0-9A-Z_a-z]\n^([\\W])$ | pg |\n^([\\W])$ | pl | [\\x01-/:-@[-^`{-©«-´¶-¹»-¿×÷]\n^([\\W])$ | v8 | [\\x01-/:-@[-^`{-ÿ]\n^([\\w-a])$ | pg | [0-9A-Z_-zªµºÀ-ÖØ-öø-ÿ]\n^([\\w-a])$ | pl | [-0-9A-Z_a-zªµºÀ-ÖØ-öø-ÿ]\n^([\\w-a])$ | v8 | [-0-9A-Z_a-z]\n\nIn the UTF8 mode, it generates a 10000 random valid UTF-8 byte sequences converted to text.\nThis will of course leave a lot of gaps, but one gets the idea on what ranges there 
are.\n\n regex | engine | deduced_ranges\n------------+--------+---------------------------------------------------------------------------------------------------------------------------------------------------------------\n^([a-z])$ | pg | [a-z]\n^([a-z])$ | pl | [a-z]\n^([a-z])$ | v8 | [a-z]\n^([\\d-a])$ | pg | ERROR\n^([\\d-a])$ | pl | [-0-9a٤-٦٩۲-۴۶۸-۹߀-߃߅߇०२४९১-২৫-৬੦੩-੪੬੯૧૭ ... 5 chars ... -୯௧௩௫౦౩౯೧೩-೫೯൧൪-൯෧෫໐໖-໗໙༡-༢២᠔᥎᪁᮵258-9𐒨𝟿]\n^([\\d-a])$ | v8 | [-0-9a]\n^([\\w-;])$ | pg | ERROR\n^([\\w-;])$ | pl | [-0-9;A-Z_a-zÀÂÆÌÎ-ÐÔÙ-Úß-áéëîñó-õø-ùûýÿ ... 3901 chars ... 𭈞𭈢𭈴𭑇𭒐𭕵𭖋𭙋𭟞𭢘𭥋𭥬𭧊𭧝𭫘𭯙𭯟𭶾𭷵𭸴𭹊𭻚𭼁𭽁𭾠𮄖𮅮𮉵𮏲𮕙𮛣𮝎𮣂𮥑𮪨忹殺灊鏹]\n^([\\w-;])$ | v8 | [-0-9;A-Z_a-z]\n^([\\w-_])$ | pg | [0-9A-Z_a-zªÁÆ-ÇÍ-ÒÔÙ-ÚÜÞáä-æèë-ìî-ïñõùý ... 3704 chars ... 𭍱𭓆𭓡𭕆𭖋𭖮𭘤𭙬𭣯𭦞𭬍𭭈𭲌𭶓𭶶𭷻𭹣𭹩𭼪𭾘𭿡𮄄𮄿𮆟𮆢𮇴𮋬𮍠𮏕𮒹𮜒𮝒𮡺𮦐𮨲𮩣𡛪韠𪊑]\n^([\\w-_])$ | pl | [-0-9A-Z_a-zªµÀ-ÁÅÈÊÑÓÕ-ÖØÚà-áã-æê-ìîð-ó ... 3884 chars ... 𭙐𭙥𭛏𭜆𭝃𭞗𭟺𭠼𭥮𭧕𭧙𭫢𭯛𭲠𭷱𭸡𭾉𮁣𮃦𮄫𮈔𮉞𮊀𮑳𮕝𮘊𮘚𮛍𮣝𮧕𮩺𮪇𮬊𮬡𡬘㩬茝鄛󠇂]\n^([\\w-_])$ | v8 | [-0-9A-Z_a-z]\n^([\\w])$ | pg | [0-9A-Z_a-zÃÇÉ-ÊÍ-ÎÐÒÖÙÛ-Þà-âåêî-ðò-ôöøú ... 3803 chars ... 𭏟-𭏠𭗷𭘱𭚆𭛿𭝵𭡓𭢕𭩪𭬞𭭆𭭾𭮺𭯌𭰅𭱇𭲩𭶧𭷡𭹿𭺟𮀑𮆔𮇩𮇰𮈯𮋷𮌜𮌨𮞄-𮞅𮩧𮫷𮬕𮮿舁]\n^([\\w])$ | pl | [0-9A-Z_a-zºÁÄ-ÆÉÍ-ÎÐÓ-ÔÖÙÛ-àâ-æéíð-ñø-ù ... 3881 chars ... 𭙗𭙳𭛨𭞌𭣘𭤁𭥖𭥜𭥷𭦋𭧺𭯊𭸘𭹍𭼷𭿰𮁵𮈅𮈇𮊩𮖛𮖹𮘠𮚞𮜞𮝀𮟟𮡖𮣝𮦖𮦘𮧏𮬅𮭁𮮟𮯓𦾱嶲󠇋]\n^([\\w])$ | v8 | [0-9A-Z_a-z]\n^([\\W])$ | pg | ERROR\n^([\\W])$ | pl | [\\x01-/:-@[-^`{-\\x7F\\u0085-\\u0089\\u008B-\\u008C\\u008E-\\u0092\\u0098¥-§©«-¯±-²¸×˄-˅ ... 4264 chars ... 􏞢􏟆􏟐􏟘􏢄􏣢􏥭􏦡􏧎􏧰􏩤􏪃􏪠􏪵􏫎􏫤􏬌􏭇􏭴􏭷􏮩􏮷􏯭􏯴􏯾􏰬􏲡􏲾􏳧􏳵􏵡􏶾􏷤􏷫􏹶􏺷􏼁􏽷􏿵]\n^([\\W])$ | v8 | [\\x01-/:-@[-^`{-\\u0080\\u0084\\u0087\\u008C\\u008F\\u0091\\u0096\\u009A -¡¥§ª-«®-°²-³µ¹¿ÁÄ ... 4855 chars ... -BGJLQT-Ubgkr-sy}「-」ェャスハホムᄀ-ᄁ좌￐ᅭᅵ￧↑￾]\n^([\\w-a])$ | pg | [0-9A-Z_-zªºÁ-ÃÇÌ-ÎÐ-ÑÔÖÝâ-ãå-æé-êìî-ñõü ... 3717 chars ... 𭝕𭟞𭡂𭡶𭤇𭥷𭦃𭧝𭮄-𭮅𭳐𭴁𭵦𭷥𭸍𭾙𭿘𮅕𮅳𮆈𮍪𮚝𮛶𮜠𮝁𮠦𮣆𮣼𮥴𮨨𮭘𮮛仌壮望-朡變]\n^([\\w-a])$ | pl | [-0-9A-Z_a-zºÁÃÇÉ-ÊÏÒ-ÔÖØÚ-ÛÞáäæí-ïõúü-ý ... 3854 chars ... 
𭏇𭒧𭔃𭔽𭙟𭞽𭡖𭢮𭢱𭤙𭤶𭧝𭪁𭪻𭯰𭰭𭲟𭳚𭵊𭵽𭸷𭾏𮂗𮃴𮈄𮋝𮌫𮍏𮚅𮞞𮠾𮡊𮡿𮢐𮨍兤潮䏕𩅅]\n^([\\w-a])$ | v8 | [-0-9A-Z_a-z]\n\npg=PostgreSQL\npl=Perl\nv8=Javascript\n\nI think the use of \\w and \\W should be considered an anti-pattern when writing regexes, in any language,\ndue to the apparent variations between popular engines. It will never be obvious to either the reader\nor the writer of the regex what was meant or what it means.\n\n/Joel", "msg_date": "Sun, 21 Feb 2021 08:13:11 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Bizarre behavior of \\w in a regular expression bracket construct" }, { "msg_contents": "On 2021-Feb-21, Joel Jacobson wrote:\n\n> regex | engine | deduced_ranges\n> ------------+--------+-------------------------------\n> ^([a-z])$ | pg | [a-z]\n> ^([a-z])$ | pl | [a-z]\n> ^([a-z])$ | v8 | [a-z]\n> ^([\\d-a])$ | pg |\n> ^([\\d-a])$ | pl | [-0-9a]\n> ^([\\d-a])$ | v8 | [-0-9a]\n> ^([\\w-;])$ | pg |\n> ^([\\w-;])$ | pl | [-0-9;A-Z_a-zªµºÀ-ÖØ-öø-ÿ]\n> ^([\\w-;])$ | v8 | [-0-9;A-Z_a-z]\n> ^([\\w-_])$ | pg | [0-9A-Z_a-zªµºÀ-ÖØ-öø-ÿ]\n> ^([\\w-_])$ | pl | [-0-9A-Z_a-zªµºÀ-ÖØ-öø-ÿ]\n> ^([\\w-_])$ | v8 | [-0-9A-Z_a-z]\n> ^([\\w])$ | pg | [0-9A-Z_a-zªµºÀ-ÖØ-öø-ÿ]\n> ^([\\w])$ | pl | [0-9A-Z_a-zªµºÀ-ÖØ-öø-ÿ]\n> ^([\\w])$ | v8 | [0-9A-Z_a-z]\n> ^([\\W])$ | pg |\n> ^([\\W])$ | pl | [\\x01-/:-@[-^`{-©«-´¶-¹»-¿×÷]\n> ^([\\W])$ | v8 | [\\x01-/:-@[-^`{-ÿ]\n> ^([\\w-a])$ | pg | [0-9A-Z_-zªµºÀ-ÖØ-öø-ÿ]\n> ^([\\w-a])$ | pl | [-0-9A-Z_a-zªµºÀ-ÖØ-öø-ÿ]\n> ^([\\w-a])$ | v8 | [-0-9A-Z_a-z]\n\nIt looks like the interpretation of these other engines is that [\\d-a]\nis the set of \\d, the literal character \"-\", and the literal character\n\"a\". In other words, the - preceded by \\d or \\w (or any other character\nclass, I guess?)
loses its special meaning of identifying a character\nrange.\n\nThis one I didn't understand:\n> ^([\\W])$ | pg |\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"Porque francamente, si para saber manejarse a uno mismo hubiera que\nrendir examen... ¿Quién es el machito que tendría carnet?\" (Mafalda)\n\n\n", "msg_date": "Sun, 21 Feb 2021 13:06:51 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Bizarre behavior of \\w in a regular expression bracket construct" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> It looks like the interpretation of these other engines is that [\\d-a]\n> is the set of \\d, the literal character \"-\", and the literal character\n> \"a\". In other words, the - preceded by \\d or \\w (or any other character\n> class, I guess?) loses its special meaning of identifying a character\n> range.\n\nYeah. While I can see the attraction of being picky about this,\nI can also see the attraction of being more compatible with other\nengines. Should we relax this?\n\nA quick experiment with perl shows that its opinion is \"if the\natom before or after a potentially range-defining dash is a\ncharacter class, then take the dash as an ordinary character\".\n(This confirms Joel's result, and also I found that e.g. [3-\\w]\ntreats the dash as a literal character.)\n\n> This one I didn't understand:\n>> ^([\\W])$ | pg |\n\nI think Joel just forgot to mark that as ERROR.
It certainly\ndoesn't work in our engine today (though I'm nearly done with\na patch to fix that).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 21 Feb 2021 12:39:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Bizarre behavior of \\w in a regular expression bracket construct" }, { "msg_contents": "On Sun, Feb 21, 2021, at 18:39, Tom Lane wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > This one I didn't understand:\n> >> ^([\\W])$ | pg |\n> \n> I think Joel just forgot to mark that as ERROR. \n\nYes, my mistake, sorry about that,\n(I manually edited the query result and replaced empty-field with \"ERROR\").\n\n(I see I also forgot to mark the ones in the first ASCII part\nof the email as ERROR, which should have been the\nones with an empty field for engine \"pg\".)\n\n/Joel", "msg_date": "Sun, 21 Feb 2021 19:27:25 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Bizarre behavior of \\w in a regular expression bracket construct" }, { "msg_contents": "I wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> It looks like the interpretation of these other engines is that [\\d-a]\n>> is the set of \\d, the literal character \"-\", and the literal character\n>> \"a\". In other words, the - preceded by \\d or \\w (or any other character\n>> class, I guess?) loses its special meaning of identifying a character\n>> range.\n\n> Yeah. 
While I can see the attraction of being picky about this,\n> I can also see the attraction of being more compatible with other\n> engines. Should we relax this?\n\nAfter some more research I'm feeling that this would be a bad idea.\nThe POSIX spec states that such cases are unspecified, meaning that\nimplementations can do what they like. Hence Perl and JS are not\nout of line to interpret it this way. However, XQuery and therefore\nalso SQL consider that a character class after a dash means character\nset subtraction [1], which is pretty nearly the exact opposite\nsemantics. Keeping in mind that we are likely to someday want to\nprovide a closer match for XQuery, I'm thinking we're best off to\nkeep such cases as an error for now. Otherwise the risk of confusion\nwill be pretty high.\n\nAnyway, 0001 attached is the promised patch to enable \\D, \\S, \\W\nto work inside bracket expressions. I did some cleanup in the\ngeneral area as well:\n\n* Create infrastructure to allow treating \\w as a character class\nin its own right. (I did not expose [[:word:]] as a class name,\nthough it would be a little more symmetric to do so; should we?)\n\n* Split cclass() into separate functions to look up a char class\nname (producing an enum) and to produce a cvec character vector\nfrom the enum. This allows the char class escapes to use the\nenum values directly without an artificial lookup.\n\n* Remove the lexnest() hack, and in consequence clean up wordchrs()\nto not interact with the lexer.\n\n* Fix colorcomplement() to not be O(N^2) in the number of colors\ninvolved. I didn't detect any measurable speedup on Joel's corpus,\nbut it seems like a good idea anyway.\n\n* Get rid of useless-as-far-as-I-can-see calls of element()\non single-character character element names in brackpart().\nelement() always maps these to the character itself, and things\nwould be quite broken if it didn't --- should \"[a]\" match something\ndifferent than \"a\" does? 
Besides, the shortcut path in brackpart()\nwasn't doing this anyway, making it even more inconsistent.\n\n\n0001 preserves the current behavior of these constructs with\nrespect to newlines, namely that:\n\n\\s matches newline, with or without 'n' flag\n\\S doesn't match newline, with or without 'n' flag\n\\w doesn't match newline, with or without 'n' flag\n\\W matches newline, except with 'n' flag\n\\d doesn't match newline, with or without 'n' flag\n\\D matches newline, except with 'n' flag\n\nPerl and Javascript believe that \\W and \\D should match newlines\nregardless of their 's' flag, so there's a case for changing\n\\W and \\D to match newline regardless of our 'n' flag. 0002\nattached is the quite trivial patch to do this. I'm not quite\n100% convinced whether this is a good change to make, but if we're\ngoing to do it now would be the time.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.regular-expressions.info/charclasssubtract.html", "msg_date": "Tue, 23 Feb 2021 12:15:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Bizarre behavior of \\w in a regular expression bracket construct" }, { "msg_contents": "Hi,\n\nOn Tue, Feb 23, 2021, at 18:15, Tom Lane wrote:\n>0001 preserves the current behavior of these constructs with\n>respect to newlines, namely that:\n>\n>\\s matches newline, with or without 'n' flag\n>\\S doesn't match newline, with or without 'n' flag\n>\\w doesn't match newline, with or without 'n' flag\n>\\W matches newline, except with 'n' flag\n>\\d doesn't match newline, with or without 'n' flag\n>\\D matches newline, except with 'n' flag\n>\n>Perl and Javascript believe that \\W and \\D should match newlines\n>regardless of their 's' flag, so there's a case for changing\n>\\W and \\D to match newline regardless of our 'n' flag. 0002\n>attached is the quite trivial patch to do this. 
I'm not quite\n>100% convinced whether this is a good change to make, but if we're\n>going to do it now would be the time.\n>\n>Thoughts?\n\nI've tested 4.4M different regex/subject pairs\nagainst 0001 and 0001+0002 trying to find\nsome interesting examples to analyze:\n\nSELECT COUNT(*) FROM regex_tests;\n4468843\n\nOut of these, 64783 (1.4%) contained \\W\nthat could be processed by the regex engine\nand that didn't produce an error:\n\nCREATE TABLE \"\\W\" AS SELECT * FROM regex_tests WHERE processed AND error_pg IS NULL AND pattern LIKE '%\\\\W%';\nSELECT 64783\n\nOut of these, 539 gave a different result\nwhen comparing 0001 vs 0001+0002:\n\nCREATE TABLE \"\\W diff\" AS SELECT *, regexp_match(subject, '('||pattern||')', 'n') AS captured_pg_0001 FROM \"\\W\" WHERE captured_pg IS DISTINCT FROM regexp_match(subject, '('||pattern||')', 'n');\nSELECT 539\n\nOut of these, 62 didn't contain any \\W\nwhen the special [\\w\\W] construct had been filtered out.\n\nCREATE TABLE \"\\W diff ignore [\\w\\W]\" AS SELECT * FROM \"\\W diff\" WHERE regexp_replace(pattern,'\\[\\\\w\\\\W\\]','','g') LIKE '%\\\\W%';\nSELECT 62\n\nOut of these, here is a break-down showing number of distinct subjects per pattern:\n\nSELECT COUNT(*), pattern FROM \"\\W diff ignore [\\w\\W]\" GROUP BY 2 ORDER BY 1 DESC;\ncount | pattern\n-------+--------------------------------------------------\n 47 | (?:^|\\W+)@apply\\s*\\(?([^);\\n]*)\\)?\n 12 | \\W\n 1 | ((?:^|}|,|;)\\W*)((?:\\w+)?\\.(?:mc|mg|row)[\\-\\w]+)\n 1 | [\\W\\d]+\n 1 | \\W*$\n(5 rows)\n\nLet's go through each case:\n\nPattern #1: (?:^|\\W+)@apply\\s*\\(?([^);\\n]*)\\)?\n====================================\n\nThis pattern is always used with the flags \"gi\".\n\nExample subject:\n\n font-family: var(--paper-font-common-base_-_font-family); -webkit-font-smoothing: var(--paper-font-common-base_-_-webkit-font-smoothing);\n @apply --paper-font-common-nowrap;\n\nIf the author would have intended to only match non-word characters without 
newlines,\nthen these kinds of subjects would only match by coincidence, since @apply is indented\nusing blank space, which is included in \\W.\n\nThe \\W+ in this example makes the regex match the \");\" on the line before \"@apply\", which looks very odd.\n\nMy conclusion is the author in this example wrongly thinks \\W+ means \"at least one white space\".\n\nI therefore think it would be an improvement in this case to always include newlines in \\W.\n\nPatch 0002 therefore gets +1 due to this example.\n\nPattern 2: \\W\n============\n\nFlags used for this pattern (among all examples, not just the ones producing a diff):\n\nSELECT flags, count FROM patterns WHERE pattern = '\\W' ORDER BY 2 DESC;\nflags | count\n-------+-------\ng | 2805\n | 1476\ngi | 39\ny | 22\n(4 rows)\n\nAll subjects for this pattern had some white-space in the beginning,\nand all of them even have at least one new-line in the beginning:\n\nSELECT length((regexp_match(subject,'^(\\n*)'))[1]), COUNT(*) FROM \"\\W diff ignore [\\w\\W]\" WHERE pattern = '\\W' GROUP BY 1 ORDER BY 1;\nlength | count\n--------+-------\n 1 | 9\n 2 | 1\n 3 | 2\n(3 rows)\n\nThis, in combination with the popularity of the \"g\" flag with this pattern,\nmakes me think \\W is used to strip away leading white-space,\nincluding new-lines.\n\nPatch 0002 therefore gets +1 due to this example.\n\nPattern 3: ((?:^|}|,|;)\\W*)((?:\\w+)?\\.(?:mc|mg|row)[\\-\\w]+)\n==============================================\n\nFlags: g\n\nSubject:\n\ndiv.mgline:hover a.close-informer {\nopacity: 0.7;\n-moz-transition: all 0.3s ease-out;\n-o-transition: all 0.3s ease-out;\n-webkit-transition: all 0.3s ease-out;\n-ms-transition: all 0.3s ease-out;\ntransition: all 0.3s ease-out;\n}\n\nTo me it looks like the author wrongly thinks \\W means \"white space\".\n\nWhat makes me believe this is that \\W* is in between\n\n (?:^|}|,|;)\n\nwhich matches end of statements, and,\n\n (?:\\w+)?\\.\n\nwhich matches an HTML tag and CSS class name, or just a CSS class 
name.\n\nThe only natural thing I can see existing in between those two constructs is white space.\n\nNormally this regex doesn't produce any difference for cases found,\nsince most CSS code has been minified where newlines are removed,\nbut the case above was not minified and produced a diff.\n\nPatch 0002 therefore gets +1 due to this example.\n\nPattern 4: [\\W\\d]+\n================\n\nNo flags for this pattern.\n\nThe case that caused a diff was a subject with just a single comma, followed by newline and then blank spaces.\n\nSubject in hex: 2c 0a 09 09 09 09 09 09 09\n\nThis caused 0001 to only match the comma,\nwhereas 0002 (and Javascript/Perl) matches the blank spaces as well.\n\nHere are some other subjects that don't necessarily cause a diff,\nbut that could hopefully make us understand the intent of the regex:\n\nSELECT DISTINCT ON (regexp_match_v8) * FROM (SELECT regexp_match_v8(subject,'[\\W\\d]+'), shrink_text(subject,40) FROM subjects WHERE pattern_id = 25935) AS x;\n regexp_match_v8 | shrink_text\n------------------------------------------------------------+-------------------------------------------------------------\n{\", +| , +\n \"} |\n{\", \"} | ,\n{.} | .col-item\n{/} | /content/phonak/se/s ... 106 chars ... e.jpg, (largeretina)\n{//} | //images.images4us.c ... 53 chars ... -481919.png, (large)\n{://} | https://www.dilling. ... 55 chars ... .webp, (medium-only)\n{3} | typo3conf/ext/rlp/Re ... 23 chars ... lp-logo.png, (large)\n(7 rows)\n\nWe can see the diffing case on the first line, the one with comma and newlines+blank spaces.\nNo clue on what that one is, but looking at the rest,\nto me it looks like they are trying to match the non-word characters in the beginning.\nThe strange thing is why \\d is included in the bracket expression.\nThis causes a difference in the last example:\n\n{3} | typo3conf/ext/rlp/Re ... 23 chars ... 
lp-logo.png, (large)\n\nIf \\d had not been included, the first \"/\" would be matched instead of the \"3\".\n\nI cannot draw any conclusions for this pattern on what would be advisable,\nexcept that in most cases for this pattern, it wouldn't make any difference to include\nor not include newlines in \\W.\n\nPattern 5: \\W*$\n==============\n\nNo flags for this pattern.\n\nThe subject is redacted due to being a promotional text for some cryptocurrency.\nIt's just four normal English sentences, where the last one is separated from the first three\nwith two newlines in between, rewritten:\n\n\"Example sentence. Some other sentence.\n\nYet some other sentence. \"\n\nDouble-quotes added to show the trailing blank space in the last sentence.\nDue to it, the 'n' regex flag causes the dot and newline to match with the 0002 patch,\nbut only the dot without the 0002 patch.\n\nIn Javascript/Perl, since $ only means end-of-string there (unless using the \"m\" flag),\nthey instead match the last blank space. 
0002 would give the same behaviour without the \"n\" flag.\n\nMy conclusion is \\W*$ is typically wrongly used to remove trailing white-space.\n\nAlways including newlines in \\W would be an improvement here,\nsince otherwise newlines wouldn't be stripped.\n\nPatch 0002 therefore gets +1 due to this example.\n\n======END OF PATTERNS=====\n\nFinal conclusion:\n\nOut of the 5 patterns analyzed,\nI found 4 of them would benefit from including newlines in \\W.\n\nThe risk of changing this seems rather small,\nsince only 0.01% of the cases found produced\nany difference at all (539 out of 4468843),\nand out of these cases, most only contained\nthe obvious [\\w\\W] which greatly benefits,\nand the rest of the 62 cases have now been\nmanually verified to also benefit from a change.\n\nMy opinion is therefore we should change \\W to include newlines.\n\nI will hopefully be able to provide a similar analysis of \\D soon,\nbut wanted to send this in the meantime.\n\n/Joel", "msg_date": "Wed, 24 Feb 2021 16:23:07 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Bizarre behavior of \\w in a regular expression bracket construct" }, { "msg_contents": "On Wed, Feb 24, 2021, at 16:23, I wrote:\n>I will hopefully be able to provide a similar analysis of \\D soon,\n>but wanted to send this in the meantime.\n\nCREATE TABLE \"\\D\" AS SELECT * FROM regex_tests WHERE processed AND error_pg IS NULL AND pattern LIKE '%\\\\D%';\nSELECT 67558\n\nCREATE TABLE \"\\D diff\" AS SELECT *, regexp_match(subject, '('||pattern||')', 'n') AS captured_pg_0001 FROM \"\\D\" WHERE captured_pg IS DISTINCT FROM regexp_match(subject, '('||pattern||')', 'n');\nSELECT 12\n\nSELECT COUNT(*), pattern FROM \"\\D diff\" GROUP BY 2 ORDER BY 1 DESC;\ncount | pattern\n-------+----------\n 11 | \\D\n 1 | [\\D|\\d]*\n(2 rows)\n\nPattern 1: \\D\n============\n\nThis pattern is used to find the first decimal separator, normally dot (.):\n\nSELECT subject FROM regex_tests WHERE pattern = '\\D' ORDER BY RANDOM() LIMIT 10;\n
 subject\n---------------------------\n1.11.00.24975645674952163\n1.11.30.6944442955860683\n1.12.40.38502468714280424\n3.5.10.9407443094500285\n1.12.40.34334381021879845\n2.0.20.5175496920692813\n1.8.30.09144561055484002\n3.4.10.6083619758942858\n3.5.10.15406771889459425\n2.0.00.6309370335082272\n(10 rows)\n\nWe can see how this works in almost all cases:\n\nSELECT captured_pg, captured_v8, count(*) from regex_tests where pattern = '\\D' GROUP BY 1,2 ORDER BY 3 DESC LIMIT 3;\ncaptured_pg | captured_v8 | count\n-------------+-------------+-------\n{.} | {.} | 66797\n | | 103\n{-} | {-} | 64\n(3 rows)\n\nIf we take a look at the diffs found,\nall such cases have subjects that start with newlines:\n\nSELECT COUNT(*), subject ~ '^\\n' AS starts_with_newline FROM \"\\D diff\" WHERE pattern = '\\D' GROUP BY 2;\ncount | starts_with_newline\n-------+---------------------\n 11 | t\n(1 row)\n\nNaturally, if newlines are not included, then something else will match instead.\n\nNow, if in these cases, ignoring the newline(s) and instead proceeding\nto match the first non-digit non-newline, maybe we would find a dot (.)\nlike in the normal case? No, that is not the case. 
Instead, we will hit\nsome arbitrary blank space or tab:\n\nSELECT convert_to(captured_pg[1],'utf8') AS \"0001+0002\", convert_to(captured_pg_0001[1],'utf8') AS \"0001\", COUNT(*) FROM \"\\D diff\" WHERE pattern = '\\D' GROUP BY 1,2;\n0001+0002 | 0001 | count\n-----------+------+-------\n\\x0a | \\x09 | 3\n\\x0a | \\x20 | 7\n\\x0a | | 1\n(3 rows)\n\nThe last example, where nothing at all matched, was due to the string only containing a single newline,\nwhich couldn't be matched.\n\nNone of these outliers contain any decimal-looking digit-sequences at all,\nit's all just white space, one \"€ EUR\" text and some text that looks like\nit's coming from some web shop's title:\n\nSELECT ROW_NUMBER() OVER (), subject FROM \"\\D diff\" WHERE pattern = '\\D';\nrow_number | subject\n------------+----------------------------------------------------------------\n 1 | +\n | +\n | +\n |\n 2 | +\n |\n 3 | +\n |\n 4 | +\n |\n 5 | +\n | € EUR +\n |\n 6 | +\n |\n 7 | +\n |\n 8 | +\n |\n 9 | +\n |\n 10 | +\n | Dunjackor, duntäcken och dunkuddar | Joutsen Dunspecialist+\n | +\n | +\n | +\n | – Joutsen Sweden +\n | +\n |\n 11 | +\n |\n(11 rows)\n\nMy conclusion is all of these are nonsensical subjects when applied to the \\D regex.\n\nOut of the subjects with actual digit-sequences,\nnone of them starts with newlines,\nso including newlines in \\D wouldn't cause any effect.\n\nI see no benefit, but also no harm, in including newlines.\n\nPattern 2: [\\D|\\d]*\n===============\n\nThis looks similar to [\\w\\W]; the author has probably not understood that pipe (\"|\") is not needed in between bracket expression parts. 
The author's intention is probably to match everything in the string, like .*, but including newlines.\n\nPatch 0002 therefore gets +1 due to this example.\n\n===END OF PATTERNS===\n\nMy final conclusion is we should always include newlines in \\D.\n\n/Joel", "msg_date": "Wed, 24 Feb 2021 17:03:39 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Bizarre behavior of \\w in a regular expression bracket construct" }, { "msg_contents": "On 2021-Feb-23, Tom Lane wrote:\n\n> * Create infrastructure to allow treating \\w as a character class\n> in its own right. 
(I did not expose [[:word:]] as a class name,\n> though it would be a little more symmetric to do so; should we?)\n\nApparently [:word:] is a GNU extension (or at least a \"bash-specific\ncharacter class\"[1] but apparently Emacs also supports it?); all the\nothers are mandated by POSIX[2].\n\n[1] https://en.wikibooks.org/wiki/Regular_Expressions/POSIX_Basic_Regular_Expressions\n[2] https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap09.html#tag_09_03_05\n\nI think it'd be fine to expose [:word:] ...\n\n\n> [1] https://www.regular-expressions.info/charclasssubtract.html\n\nI had never heard of this subtraction thing. Nightmarish and confusing\nsyntax, but useful.\n\n> + Also, the character class shorthands <literal>\\D</literal>\n> + and <literal>\\W</literal> will match a newline regardless of this mode.\n> + (Before <productname>PostgreSQL</productname> 14, they did not match\n> + newlines in newline-sensitive mode.)\n\nThis seems an acceptable change to me, but then I only work here.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n", "msg_date": "Wed, 24 Feb 2021 13:47:49 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Bizarre behavior of \\w in a regular expression bracket construct" }, { "msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> On Tue, Feb 23, 2021, at 18:15, Tom Lane wrote:\n>> Perl and Javascript believe that \\W and \\D should match newlines\n>> regardless of their 's' flag, so there's a case for changing\n>> \\W and \\D to match newline regardless of our 'n' flag. 0002\n>> attached is the quite trivial patch to do this. I'm not quite\n>> 100% convinced whether this is a good change to make, but if we're\n>> going to do it now would be the time.\n\n> [ extensive analysis ]\n> My opinion is therefore we should change \\W to include newlines.\n\nWow, thanks for doing all that work! 
But OTOH, looking at a\ncorpus taken from Javascript practice seems like it'd inevitably\nlead to that conclusion, since that is what \\W does in Javascript.\nWhether the regex authors knew the exact rules or not (and I share\nyour suspicions that some of them didn't), if they'd done any\ntesting they'd have been led to write their code that way.\n\nStill, I am not convinced that there's much to justify our current\ndefinition either. Looking at the existing code shows that the way\n\\W and \\D work now was forced by Spencer's decision to make 'n' mode\naffect complemented character classes in general, since they're just\nmacros for complemented character classes. With this reimplementation,\nthat connection isn't there anymore, so we can change it if we like.\n\nSince (AFAICS) the main use of 'n' mode is to make our regexes work\nmore like these other products, bringing \\W and \\D into line with\nthem seems like a reasonable thing to do.\n\nI've also decided after reflection that the patch should indeed\ncreate a named \"word\" character class. That's allowed per POSIX,\nand it simplifies some aspects of the documentation, since we can\nrely on referencing the class instead of repeating ourselves.\nThe attached 0001 v2 does that; it's otherwise the same as before.\n\nSpeaking of documentation, I'm wondering more and more why we're\ncontinuing to carry along re_syntax.n. We don't expose that to\nusers in any way, and it has not been maintained nearly as faithfully\nas the SGML docs. (Looking at the git history, I think I included\nit in 7bcc6d98f because it replaced re_format.7, which had been there\nin that directory since Postgres95. 
But that history is immaterial\nnow that we've got proper user-facing documentation.)\n\n\t\t\tregards, tom lane\n\n#text/x-diff; name=\"0001-rework-char-class-escapes-2.patch\" [0001-rework-char-class-escapes-2.patch] /home/tgl/pgsql/0001-rework-char-class-escapes-2.patch\n#text/x-diff; name=\"0002-DW-always-match-newline.patch\" [0002-DW-always-match-newline.patch] /home/tgl/pgsql/0002-DW-always-match-newline.patch\n\n\n", "msg_date": "Wed, 24 Feb 2021 12:09:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Bizarre behavior of \\w in a regular expression bracket construct" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Feb-23, Tom Lane wrote:\n>> * Create infrastructure to allow treating \\w as a character class\n>> in its own right. (I did not expose [[:word:]] as a class name,\n>> though it would be a little more symmetric to do so; should we?)\n\n> Apparently [:word:] is a GNU extension (or at least a \"bash-specific\n> character class\"[1] but apparently Emacs also supports it?); all the\n> others are mandated by POSIX[2].\n> I think it'd be fine to expose [:word:] ...\n\nYeah, I'd independently come to the same conclusion. This GNU precedent\noffers even more basis for that, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Feb 2021 12:11:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Bizarre behavior of \\w in a regular expression bracket construct" }, { "msg_contents": "I wrote:\n> I've also decided after reflection that the patch should indeed\n> create a named \"word\" character class. 
That's allowed per POSIX,\n> and it simplifies some aspects of the documentation, since we can\n> rely on referencing the class instead of repeating ourselves.\n> The attached 0001 v2 does that; it's otherwise the same as before.\n\nSigh, this time with the attachments ...\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 24 Feb 2021 12:14:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Bizarre behavior of \\w in a regular expression bracket construct" } ]
[ { "msg_contents": "Hi,\n\nIn testing the regex engine, I found a strange case that puzzles me.\n\nWhen a text string of a single space character is casted to a character,\nI would assume the result to be, a space character,\nbut for some reason it's the null character.\n\nTrying to produce a text with null character gives an error, like expected:\n\nSELECT chr(0);\nERROR: null character not permitted\n\nSELECT c = ascii(chr(c)::char), COUNT(*) FROM generate_series(1,255) AS c GROUP BY 1;\nf | 1\nt | 254\n\nSELECT * FROM generate_series(1,255) AS c WHERE c <> ascii(chr(c)::char);\n32\n\nIt's only character 32 that has this \"special effect\".\n\n/Joel", "msg_date": "Sun, 21 Feb 2021 06:46:56 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Mysterious ::text::char cast: ascii(chr(32)::char) = 0" }, { "msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> When a text string of a single space character is casted to a character,\n> I would assume the result to be, a space character,\n> but for some reason it's the null character.\n\nThis is because of the weird semantics of char(N). chr(32) produces\na TEXT value containing one space, which you then cast to CHAR(1),\nmaking the trailing space semantically insignificant. 
But the\nascii() function requires a TEXT argument, so we immediately cast\nthe string back to TEXT, and that cast is defined to strip any\ntrailing spaces. Thus, what gets delivered to ascii() is an empty\nTEXT string, causing it to return 0.\n\nIf you'd just done ascii(chr(c)), you'd have gotten c.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 21 Feb 2021 01:10:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Mysterious ::text::char cast: ascii(chr(32)::char) = 0" } ]
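The round trip Tom describes is easy to model outside the database. Below is a small Python sketch of the semantics only; the three helper functions are invented stand-ins for the text-to-char(1) cast, the char(n)-to-text cast, and ascii(), not PostgreSQL's implementation:

```python
# Invented helpers modeling the casts described above; none of these
# exist in PostgreSQL, they only mimic the documented semantics.

def cast_text_to_char1(s):
    """text -> char(1): truncate and blank-pad to a length of one."""
    return (s + " ")[:1]

def cast_char_to_text(s):
    """char(n) -> text: trailing spaces are insignificant, so strip them."""
    return s.rstrip(" ")

def pg_ascii(s):
    """ascii(): code point of the first character; 0 for an empty string."""
    return ord(s[0]) if s else 0

# Reproduces Joel's observation: only chr(32) fails the round trip.
mismatches = [c for c in range(1, 256)
              if pg_ascii(cast_char_to_text(cast_text_to_char1(chr(c)))) != c]
assert mismatches == [32]
```

Only chr(32) becomes pure trailing space after the char(1) cast, which is why it is the lone mismatch in Joel's generate_series query, while ascii(chr(c)) without the char(1) round trip returns c for every value.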
[ { "msg_contents": "Hi,\n\nWhile reviewing progress reporting for COPY command patches [1], a\npoint on using pgstat_progress_update_multi_param instead of\npgstat_progress_update_param wherever possible was suggested in [1].\nWe could do multiple param updates at once with a single API than\ndoing each param update separately. The advantages are - 1) reducing\nfew function calls making the code look cleaner 2) atomically updating\nmultiple parameters at once within a single backend write critical\nsection i.e. incrementing st_changecount at once in a critical\nsection instead of doing it for each param separately.\n\nAttached is a patch that replaces some subsequent multiple\nupdate_param calls with a single update_multi_param.\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/CALj2ACXQrFM%2BDSN9xr%3D%2ByRotBufnC_xgG-FQ6VXAUZRPihZAew%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Sun, 21 Feb 2021 11:30:21 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Use pgstat_progress_update_multi_param instead of single param update" }, { "msg_contents": "On Sun, Feb 21, 2021 at 11:30:21AM +0530, Bharath Rupireddy wrote:\n> Attached is a patch that replaces some subsequent multiple\n> update_param calls with a single update_multi_param.\n\nLooks mostly fine to me.\n\n- if (OidIsValid(indexOid))\n- pgstat_progress_update_param(PROGRESS_CLUSTER_COMMAND,\n- PROGRESS_CLUSTER_COMMAND_CLUSTER);\n- else\n- pgstat_progress_update_param(PROGRESS_CLUSTER_COMMAND,\n- PROGRESS_CLUSTER_COMMAND_VACUUM_FULL);\n+ pgstat_progress_update_param(PROGRESS_CLUSTER_COMMAND,\n+ OidIsValid(indexOid) ? 
PROGRESS_CLUSTER_COMMAND_CLUSTER :\n+ PROGRESS_CLUSTER_COMMAND_VACUUM_FULL);\nWhat's the point of changing this one?\n--\nMichael", "msg_date": "Sun, 21 Feb 2021 19:47:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Use pgstat_progress_update_multi_param instead of single param\n update" }, { "msg_contents": "On Sun, Feb 21, 2021 at 4:18 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, Feb 21, 2021 at 11:30:21AM +0530, Bharath Rupireddy wrote:\n> > Attached is a patch that replaces some subsequent multiple\n> > update_param calls with a single update_multi_param.\n>\n> Looks mostly fine to me.\n>\n> - if (OidIsValid(indexOid))\n> - pgstat_progress_update_param(PROGRESS_CLUSTER_COMMAND,\n> - PROGRESS_CLUSTER_COMMAND_CLUSTER);\n> - else\n> - pgstat_progress_update_param(PROGRESS_CLUSTER_COMMAND,\n> - PROGRESS_CLUSTER_COMMAND_VACUUM_FULL);\n> + pgstat_progress_update_param(PROGRESS_CLUSTER_COMMAND,\n> + OidIsValid(indexOid) ? 
PROGRESS_CLUSTER_COMMAND_CLUSTER :\n> + PROGRESS_CLUSTER_COMMAND_VACUUM_FULL);\n> What's the point of changing this one?\n\nWhile we are at it, I wanted to use a single line statement instead of\nif else, just like we do it in do_analyze_rel as below.\n\n pgstat_progress_update_param(PROGRESS_ANALYZE_PHASE,\n inh ?\nPROGRESS_ANALYZE_PHASE_ACQUIRE_SAMPLE_ROWS_INH :\n PROGRESS_ANALYZE_PHASE_ACQUIRE_SAMPLE_ROWS);\n\nWe can ignore it if it doesn't seem a good way.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 21 Feb 2021 16:43:23 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use pgstat_progress_update_multi_param instead of single param\n update" }, { "msg_contents": "On Sun, Feb 21, 2021 at 04:43:23PM +0530, Bharath Rupireddy wrote:\n> While we are at it, I wanted to use a single line statement instead of\n> if else, just like we do it in do_analyze_rel as below.\n> \n> pgstat_progress_update_param(PROGRESS_ANALYZE_PHASE,\n> inh ?\n> PROGRESS_ANALYZE_PHASE_ACQUIRE_SAMPLE_ROWS_INH :\n> PROGRESS_ANALYZE_PHASE_ACQUIRE_SAMPLE_ROWS);\n> \n> We can ignore it if it doesn't seem a good way.\n\nWhat's always annoying with such things is that they create conflicts\nwith back-branches. So I have removed it, and applied the rest.\nThanks!\n--\nMichael", "msg_date": "Mon, 22 Feb 2021 14:22:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Use pgstat_progress_update_multi_param instead of single param\n update" } ]
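For readers following along, the "critical section" being batched in this thread is the changecount protocol used by the backend-status machinery. Here is a highly simplified Python sketch of why one multi-param call is preferable to several single-param calls; the class and method names are invented for illustration (the real code is C, guarded by macros such as PGSTAT_BEGIN_WRITE_ACTIVITY, and uses memory barriers around the counter):

```python
# Hypothetical model of the st_changecount protocol: readers retry if the
# counter is odd or changed while they copied the slot, so writes made
# between one begin/end pair are published as a single consistent snapshot.

class ProgressSlot:
    def __init__(self, nparams):
        self.changecount = 0          # even: no write in progress
        self.params = [0] * nparams

    def _begin_write(self):
        self.changecount += 1         # counter goes odd: write in progress

    def _end_write(self):
        self.changecount += 1         # even again: snapshot is consistent

    def update_param(self, index, val):
        self._begin_write()
        self.params[index] = val
        self._end_write()

    def update_multi_param(self, indexes, values):
        self._begin_write()           # one critical section for all writes
        for i, v in zip(indexes, values):
            self.params[i] = v
        self._end_write()

slot = ProgressSlot(4)
slot.update_multi_param([0, 2], [7, 9])
assert slot.params == [7, 0, 9, 0] and slot.changecount == 2
slot.update_param(1, 5)
assert slot.params == [7, 5, 9, 0] and slot.changecount == 4
```

In this model the multi-param form changes both values inside one write cycle, so a reader can never observe one updated parameter without the other, which is the atomicity point made at the top of the thread.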
[ { "msg_contents": "When combining multiple grouping items, such as rollups and cubes, the\nresulting flattened grouping sets can contain duplicate items. The\nstandard provides for this by allowing GROUP BY DISTINCT to deduplicate\nthem prior to doing the actual work.\n\nFor example:\nGROUP BY ROLLUP (a,b), ROLLUP (a,c)\n\nexpands to the sets:\n(a,b,c), (a,b), (a,b), (a,c), (a), (a), (a,c), (a), ()\n\nbut:\nGROUP BY DISTINCT ROLLUP (a,b), ROLLUP (a,c)\n\nexpands to just the sets:\n(a,b,c), (a,b), (a,c), (a), ()\n\nAttached is a patch to implement this for PostgreSQL.\n-- \nVik Fearing", "msg_date": "Sun, 21 Feb 2021 13:52:24 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "GROUP BY DISTINCT" }, { "msg_contents": "> On 2021.02.21. 13:52 Vik Fearing <vik@postgresfriends.org> wrote:\n> \n> Attached is a patch to implement this for PostgreSQL.\n> []\n\nThe changed line that gets stuffed into sql_features is missing a terminal value (to fill the 'comments' column).\nThis line:\n'+T434\tGROUP BY DISTINCT\t\t\tYES'\n\n(A tab at the end will do, I suppose; that's how I fixed the patch locally)\n\nErik Rijkers\n\n\n", "msg_date": "Sun, 21 Feb 2021 15:06:03 +0100 (CET)", "msg_from": "er@xs4all.nl", "msg_from_op": false, "msg_subject": "Re: GROUP BY DISTINCT" }, { "msg_contents": "On 2/21/21 3:06 PM, er@xs4all.nl wrote:\n>> On 2021.02.21. 13:52 Vik Fearing <vik@postgresfriends.org> wrote:\n>> \n>> Attached is a patch to implement this for PostgreSQL.\n>> []\n> \n> The changed line that gets stuffed into sql_features is missing a terminal value (to fill the 'comments' column).\n> This line:\n> '+T434\tGROUP BY DISTINCT\t\t\tYES'\n> \n> (A tab at the end will do, I suppose; that's how I fixed the patch locally)\n\nArgh. 
Fixed.\n\nThank you for looking at it!\n-- \nVik Fearing", "msg_date": "Sun, 21 Feb 2021 15:14:12 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: GROUP BY DISTINCT" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nHi,\r\n\r\nthis is a useful feature, thank you for implementing. I gather that it follows the standard, if so,\r\nthen there are definitely no objections from me.\r\n\r\nThe patch in version 2, applies cleanly and passes all the tests.\r\nIt contains documentation which seems correct to a non native speaker.\r\n\r\nAs a minor gripe, I would note the addition of list_int_cmp.\r\nThe block\r\n\r\n+ /* Sort each groupset individually */\r\n+ foreach(cell, result)\r\n+ list_sort(lfirst(cell), list_int_cmp);\r\n\r\nCan follow suit from the rest of the code, and define a static cmp_list_int_asc(), as\r\nindeed the same patch does for cmp_list_len_contents_asc.\r\nThis is indeed point of which I will not hold a too strong opinion.\r\n\r\nOverall :+1: from me.\r\n\r\nI will be bumping to 'Ready for Committer' unless objections.", "msg_date": "Tue, 02 Mar 2021 15:06:28 +0000", "msg_from": "Georgios Kokolatos <gkokolatos@protonmail.com>", "msg_from_op": false, "msg_subject": "Re: GROUP BY DISTINCT" }, { "msg_contents": "On 3/2/21 4:06 PM, Georgios Kokolatos wrote:\n> As a minor gripe, I would note the addition of list_int_cmp.\n> The block\n> \n> + /* Sort each groupset individually */\n> + foreach(cell, result)\n> + list_sort(lfirst(cell), list_int_cmp);\n> \n> Can follow suit from the rest of the code, and define a static cmp_list_int_asc(), as\n> indeed the same patch does for cmp_list_len_contents_asc.\n> This is indeed point of which I will not hold a too strong opinion.\n\nI did it this way because list_int_cmp is a general purpose 
function for\nint lists that can be reused elsewhere in the future. Whereas\ncmp_list_len_contents_asc is very specific to this case so I kept it local.\n\nI'm happy to change it around if that's what consensus wants.\n\n> Overall :+1: from me.\n\nThanks for looking at it!\n\n> I will be bumping to 'Ready for Committer' unless objections.\n\nIn that case, here is another patch that fixes a typo in the docs\nmentioned privately to me by Erik. The typo (and a gratuitous rebase)\nis the only change to what you just reviewed.\n-- \nVik Fearing", "msg_date": "Tue, 2 Mar 2021 17:51:52 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: GROUP BY DISTINCT" }, { "msg_contents": "\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Tuesday, March 2, 2021 5:51 PM, Vik Fearing <vik@postgresfriends.org> wrote:\n\n> On 3/2/21 4:06 PM, Georgios Kokolatos wrote:\n>\n> > As a minor gripe, I would note the addition of list_int_cmp.\n> > The block\n> >\n> > - /* Sort each groupset individually */\n> >\n> >\n> > - foreach(cell, result)\n> >\n> >\n> > - list_sort(lfirst(cell), list_int_cmp);\n> >\n> >\n> >\n> > Can follow suit from the rest of the code, and define a static cmp_list_int_asc(), as\n> > indeed the same patch does for cmp_list_len_contents_asc.\n> > This is indeed point of which I will not hold a too strong opinion.\n>\n> I did it this way because list_int_cmp is a general purpose function for\n> int lists that can be reused elsewhere in the future. Whereas\n> cmp_list_len_contents_asc is very specific to this case so I kept it local.\n\nOf course. 
I got the intention and I have noted my opinion.\n>\n> I'm happy to change it around if that's what consensus wants.\n\nAs before, I will not hold a too strong opinion.\n\n>\n> > Overall :+1: from me.\n>\n> Thanks for looking at it!\n>\n> > I will be bumping to 'Ready for Committer' unless objections.\n>\n> In that case, here is another patch that fixes a typo in the docs\n> mentioned privately to me by Erik. The typo (and a gratuitous rebase)\n> is the only change to what you just reviewed.\n\nThank you. The typo was indistiguishable to me too.\n\nMy :+1: stands for version 3 of the patch. Updating status in the\ncommitfest to reflect that.\n\n//Georgios -- https://www.vmware.com\n\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Vik Fearing\n\n\n\n\n", "msg_date": "Tue, 02 Mar 2021 19:21:42 +0000", "msg_from": "Georgios <gkokolatos@protonmail.com>", "msg_from_op": false, "msg_subject": "Re: GROUP BY DISTINCT" }, { "msg_contents": "Hi Vik,\n\nThe patch seems quite ready, I have just two comments.\n\n1) Shouldn't this add another <indexterm> for DISTINCT, somewhere in the\ndocumentation? Now the index points just to the SELECT DISTINCT part.\n\n2) The part in gram.y that wraps/unwraps the boolean flag as an integer,\nin order to stash it in the group lists is rather ugly, IMHO. It forces\nall the places handling the list to be aware of this (there are not\nmany, but still ...). And there are no other places doing (bool) intVal\nso it's not like there's a precedent for this.\n\nI think the clean solution is to make group_clause produce a struct with\ntwo fields, and just use that. Not sure how invasive that will be\noutside gram.y, though.\n\n\nAlso, the all_or_distinct vs. distinct_or_all seems a bit error-prone. 
I\nwonder if we can come up with some clearer names, describing the context\nof those types.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 13 Mar 2021 00:33:36 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: GROUP BY DISTINCT" }, { "msg_contents": "---------- Forwarded message ---------\nFrom: Tomas Vondra <tomas.vondra@enterprisedb.com>\nDate: Fri, Mar 12, 2021 at 11:33 PM\nSubject: Re: GROUP BY DISTINCT\nTo: Vik Fearing <vik@postgresfriends.org>, Georgios Kokolatos <\ngkokolatos@protonmail.com>, <pgsql-hackers@lists.postgresql.org>\nCc: Erik Rijkers <er@xs4all.nl>\n\n\nHi Vik,\n\nThe patch seems quite ready, I have just two comments.\n\n1) Shouldn't this add another <indexterm> for DISTINCT, somewhere in the\ndocumentation? Now the index points just to the SELECT DISTINCT part.\n\n.....\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\nAfter reading the above thread in hackers, I noticed that the index does\nnot point to aggrgeate functions either and DISTINCT is not mentioned in\nthe aggregate functions page either:\nhttps://www.postgresql.org/docs/current/functions-aggregate.html\nShouldn't it be mentioned with an example of COUNT(DISTINCT ...) or\naggregate_function(DISTINCT ...) in general ?\n\nBest regards\n\nPantelis Theodosiou", "msg_date": "Sat, 13 Mar 2021 01:03:19 +0000", "msg_from": "Pantelis Theodosiou <ypercube@gmail.com>", "msg_from_op": false, "msg_subject": "Fwd: GROUP BY DISTINCT" }, { "msg_contents": "On 3/13/21 12:33 AM, Tomas Vondra wrote:\n> Hi Vik,\n> \n> The patch seems quite ready, I have just two comments.\n\nThanks for taking a look.\n\n> 1) Shouldn't this add another <indexterm> for DISTINCT, somewhere in the\n> documentation? Now the index points just to the SELECT DISTINCT part.\n\nGood idea; I never think about the index.\n\n> 2) The part in gram.y that wraps/unwraps the boolean flag as an integer,\n> in order to stash it in the group lists is rather ugly, IMHO. It forces\n> all the places handling the list to be aware of this (there are not\n> many, but still ...). And there are no other places doing (bool) intVal\n> so it's not like there's a precedent for this.\n\nThere is kind of a precedent for it, I was copying off of TriggerEvents\nand func_alias_clause.\n\n> I think the clean solution is to make group_clause produce a struct with\n> two fields, and just use that. 
I\n> wonder if we can come up with some clearer names, describing the context\n> of those types.\n\nI turned this into an enum for ALL/DISTINCT/default and the caller can\nchoose what it wants to do with default. I think that's a lot cleaner,\ntoo. Maybe DISTINCT ON should be changed to fit in that? I left it\nalone for now.\n\nI also snuck in something that all of us overlooked which is outputting\nthe DISTINCT in ruleutils.c. I didn't add a test for it but that would\nhave been an unfortunate bug.\n\nNew patch attached, rebased on 15639d5e8f.\n-- \nVik Fearing", "msg_date": "Tue, 16 Mar 2021 09:21:03 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: GROUP BY DISTINCT" }, { "msg_contents": "\n\nOn 3/16/21 9:21 AM, Vik Fearing wrote:\n> On 3/13/21 12:33 AM, Tomas Vondra wrote:\n>> Hi Vik,\n>>\n>> The patch seems quite ready, I have just two comments.\n> \n> Thanks for taking a look.\n> \n>> 1) Shouldn't this add another <indexterm> for DISTINCT, somewhere in the\n>> documentation? Now the index points just to the SELECT DISTINCT part.\n> \n> Good idea; I never think about the index.\n> \n>> 2) The part in gram.y that wraps/unwraps the boolean flag as an integer,\n>> in order to stash it in the group lists is rather ugly, IMHO. It forces\n>> all the places handling the list to be aware of this (there are not\n>> many, but still ...). And there are no other places doing (bool) intVal\n>> so it's not like there's a precedent for this.\n> \n> There is kind of a precedent for it, I was copying off of TriggerEvents\n> and func_alias_clause.\n> \n\nI see. I was looking for \"(bool) intVal\" but you're right TriggerEvents\ncode does something similar.\n\n>> I think the clean solution is to make group_clause produce a struct with\n>> two fields, and just use that. 
Not sure how invasive that will be\n>> outside gram.y, though.\n> \n> I didn't want to create a whole new parse node for it, but Andrew Gierth\n> pointed me towards SelectLimit so I did it like that and I agree it is\n> much cleaner.\n> \n\nI agree, that's much cleaner.\n\n>> Also, the all_or_distinct vs. distinct_or_all seems a bit error-prone. I\n>> wonder if we can come up with some clearer names, describing the context\n>> of those types.\n> \n> I turned this into an enum for ALL/DISTINCT/default and the caller can\n> choose what it wants to do with default. I think that's a lot cleaner,\n> too. Maybe DISTINCT ON should be changed to fit in that? I left it\n> alone for now.\n> \n\nI think DISTINCT ON is a different kind of animal, because that is a\nlist of expressions, not just a simple enum state.\n\n> I also snuck in something that all of us overlooked which is outputting\n> the DISTINCT in ruleutils.c. I didn't add a test for it but that would\n> have been an unfortunate bug.\n> \n\nOh!\n\n> New patch attached, rebased on 15639d5e8f.\n> \n\nThanks. At this point it seems fine to me, no further comments.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 16 Mar 2021 15:52:52 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: GROUP BY DISTINCT" }, { "msg_contents": "On 3/16/21 3:52 PM, Tomas Vondra wrote:\n> \n> \n> On 3/16/21 9:21 AM, Vik Fearing wrote:\n>> On 3/13/21 12:33 AM, Tomas Vondra wrote:\n>>> Hi Vik,\n>>>\n>>> The patch seems quite ready, I have just two comments.\n>>\n>> Thanks for taking a look.\n>>\n>>> 1) Shouldn't this add another <indexterm> for DISTINCT, somewhere in the\n>>> documentation? 
Now the index points just to the SELECT DISTINCT part.\n>>\n>> Good idea; I never think about the index.\n>>\n>>> 2) The part in gram.y that wraps/unwraps the boolean flag as an integer,\n>>> in order to stash it in the group lists is rather ugly, IMHO. It forces\n>>> all the places handling the list to be aware of this (there are not\n>>> many, but still ...). And there are no other places doing (bool) intVal\n>>> so it's not like there's a precedent for this.\n>>\n>> There is kind of a precedent for it, I was copying off of TriggerEvents\n>> and func_alias_clause.\n>>\n> \n> I see. I was looking for \"(bool) intVal\" but you're right TriggerEvents\n> code does something similar.\n> \n>>> I think the clean solution is to make group_clause produce a struct with\n>>> two fields, and just use that. Not sure how invasive that will be\n>>> outside gram.y, though.\n>>\n>> I didn't want to create a whole new parse node for it, but Andrew Gierth\n>> pointed me towards SelectLimit so I did it like that and I agree it is\n>> much cleaner.\n>>\n> \n> I agree, that's much cleaner.\n> \n>>> Also, the all_or_distinct vs. distinct_or_all seems a bit error-prone. I\n>>> wonder if we can come up with some clearer names, describing the context\n>>> of those types.\n>>\n>> I turned this into an enum for ALL/DISTINCT/default and the caller can\n>> choose what it wants to do with default. I think that's a lot cleaner,\n>> too. Maybe DISTINCT ON should be changed to fit in that? I left it\n>> alone for now.\n>>\n> \n> I think DISTINCT ON is a different kind of animal, because that is a\n> list of expressions, not just a simple enum state.\n> \n>> I also snuck in something that all of us overlooked which is outputting\n>> the DISTINCT in ruleutils.c. I didn't add a test for it but that would\n>> have been an unfortunate bug.\n>>\n> \n> Oh!\n> \n>> New patch attached, rebased on 15639d5e8f.\n>>\n> \n> Thanks. At this point it seems fine to me, no further comments.\n> \n\nPushed. 
Thanks for the patch.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 18 Mar 2021 18:25:47 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: GROUP BY DISTINCT" }, { "msg_contents": "Hi, I didn't think of including you in this suggestion.\nOr the pdsql-docs was not the right list to post? I didn't want to mix it\nwith the GROUP BY DISTINCT patch.\n\nPlease check my suggestion.\n\nBest regards\nPantelis Theodosiou\n\n\n\n\n---------- Forwarded message ---------\nFrom: Pantelis Theodosiou <ypercube@gmail.com>\nDate: Sat, Mar 13, 2021 at 1:03 AM\nSubject: Fwd: GROUP BY DISTINCT\nTo: <pgsql-docs@lists.postgresql.org>\n\n\n\n---------- Forwarded message ---------\nFrom: Tomas Vondra <tomas.vondra@enterprisedb.com>\nDate: Fri, Mar 12, 2021 at 11:33 PM\nSubject: Re: GROUP BY DISTINCT\nTo: Vik Fearing <vik@postgresfriends.org>, Georgios Kokolatos <\ngkokolatos@protonmail.com>, <pgsql-hackers@lists.postgresql.org>\nCc: Erik Rijkers <er@xs4all.nl>\n\n\nHi Vik,\n\nThe patch seems quite ready, I have just two comments.\n\n1) Shouldn't this add another <indexterm> for DISTINCT, somewhere in the\ndocumentation? Now the index points just to the SELECT DISTINCT part.\n\n.....\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\nAfter reading the above thread in hackers, I noticed that the index does\nnot point to aggrgeate functions either and DISTINCT is not mentioned in\nthe aggregate functions page either:\nhttps://www.postgresql.org/docs/current/functions-aggregate.html\nShouldn't it be mentioned with an example of COUNT(DISTINCT ...) or\naggregate_function(DISTINCT ...) in general ?\n\nBest regards\n\nPantelis Theodosiou", "msg_date": "Thu, 18 Mar 2021 18:03:03 +0000", "msg_from": "Pantelis Theodosiou <ypercube@gmail.com>", "msg_from_op": false, "msg_subject": "DISTINCT term in aggregate function" }, { "msg_contents": "Sorry, I'm not reading pgsql-docs very often, so I missed the post.\nYeah, we should probably add an indexterm to the other places too.\n\nregards\n\nOn 3/18/21 7:03 PM, Pantelis Theodosiou wrote:\n> Hi, I didn't think of including you in this suggestion.\n> Or the pdsql-docs was not the right list to post? 
I didn't want to mix\n> it with the GROUP BY DISTINCT patch.\n> \n> Please check my suggestion.\n> \n> Best regards\n> Pantelis Theodosiou\n> \n> \n> \n> \n> ---------- Forwarded message ---------\n> From: *Pantelis Theodosiou* <ypercube@gmail.com <mailto:ypercube@gmail.com>>\n> Date: Sat, Mar 13, 2021 at 1:03 AM\n> Subject: Fwd: GROUP BY DISTINCT\n> To: <pgsql-docs@lists.postgresql.org\n> <mailto:pgsql-docs@lists.postgresql.org>>\n> \n> \n> \n> ---------- Forwarded message ---------\n> From: *Tomas Vondra* <tomas.vondra@enterprisedb.com\n> <mailto:tomas.vondra@enterprisedb.com>>\n> Date: Fri, Mar 12, 2021 at 11:33 PM\n> Subject: Re: GROUP BY DISTINCT\n> To: Vik Fearing <vik@postgresfriends.org\n> <mailto:vik@postgresfriends.org>>, Georgios Kokolatos\n> <gkokolatos@protonmail.com <mailto:gkokolatos@protonmail.com>>,\n> <pgsql-hackers@lists.postgresql.org\n> <mailto:pgsql-hackers@lists.postgresql.org>>\n> Cc: Erik Rijkers <er@xs4all.nl <mailto:er@xs4all.nl>>\n> \n> \n> Hi Vik,\n> \n> The patch seems quite ready, I have just two comments.\n> \n> 1) Shouldn't this add another <indexterm> for DISTINCT, somewhere in the\n> documentation? Now the index points just to the SELECT DISTINCT part.\n> \n> .....\n> \n> regards\n> \n> -- \n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com <http://www.enterprisedb.com>\n> The Enterprise PostgreSQL Company\n> \n> \n> \n> After reading the above thread in hackers, I noticed that the index does\n> not point to aggrgeate functions either and DISTINCT is not mentioned in\n> the aggregate functions page\n> either: https://www.postgresql.org/docs/current/functions-aggregate.html\n> <https://www.postgresql.org/docs/current/functions-aggregate.html>\n> Shouldn't it be mentioned with an example of COUNT(DISTINCT ...)  or\n> aggregate_function(DISTINCT ...) 
in general ?\n> \n> Best regards\n> \n> Pantelis Theodosiou\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 18 Mar 2021 19:05:52 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: DISTINCT term in aggregate function" }, { "msg_contents": "\n\nOn 3/18/21 6:25 PM, Tomas Vondra wrote:\n> On 3/16/21 3:52 PM, Tomas Vondra wrote:\n>>\n>>\n>> On 3/16/21 9:21 AM, Vik Fearing wrote:\n>>> On 3/13/21 12:33 AM, Tomas Vondra wrote:\n>>>> Hi Vik,\n>>>>\n>>>> The patch seems quite ready, I have just two comments.\n>>>\n>>> Thanks for taking a look.\n>>>\n>>>> 1) Shouldn't this add another <indexterm> for DISTINCT, somewhere in the\n>>>> documentation? Now the index points just to the SELECT DISTINCT part.\n>>>\n>>> Good idea; I never think about the index.\n>>>\n>>>> 2) The part in gram.y that wraps/unwraps the boolean flag as an integer,\n>>>> in order to stash it in the group lists is rather ugly, IMHO. It forces\n>>>> all the places handling the list to be aware of this (there are not\n>>>> many, but still ...). And there are no other places doing (bool) intVal\n>>>> so it's not like there's a precedent for this.\n>>>\n>>> There is kind of a precedent for it, I was copying off of TriggerEvents\n>>> and func_alias_clause.\n>>>\n>>\n>> I see. I was looking for \"(bool) intVal\" but you're right TriggerEvents\n>> code does something similar.\n>>\n>>>> I think the clean solution is to make group_clause produce a struct with\n>>>> two fields, and just use that. Not sure how invasive that will be\n>>>> outside gram.y, though.\n>>>\n>>> I didn't want to create a whole new parse node for it, but Andrew Gierth\n>>> pointed me towards SelectLimit so I did it like that and I agree it is\n>>> much cleaner.\n>>>\n>>\n>> I agree, that's much cleaner.\n>>\n>>>> Also, the all_or_distinct vs. distinct_or_all seems a bit error-prone. 
I\n>>>> wonder if we can come up with some clearer names, describing the context\n>>>> of those types.\n>>>\n>>> I turned this into an enum for ALL/DISTINCT/default and the caller can\n>>> choose what it wants to do with default. I think that's a lot cleaner,\n>>> too. Maybe DISTINCT ON should be changed to fit in that? I left it\n>>> alone for now.\n>>>\n>>\n>> I think DISTINCT ON is a different kind of animal, because that is a\n>> list of expressions, not just a simple enum state.\n>>\n>>> I also snuck in something that all of us overlooked which is outputting\n>>> the DISTINCT in ruleutils.c. I didn't add a test for it but that would\n>>> have been an unfortunate bug.\n>>>\n>>\n>> Oh!\n>>\n>>> New patch attached, rebased on 15639d5e8f.\n>>>\n>>\n>> Thanks. At this point it seems fine to me, no further comments.\n>>\n> \n> Pushed. Thanks for the patch.\n> \n\nHmmm, this seems to fail on lapwing with this error:\n\nparse_agg.c: In function 'expand_grouping_sets':\nparse_agg.c:1851:23: error: value computed is not used\n[-Werror=unused-value]\ncc1: all warnings being treated as errors\n\nThat line is this:\n\n foreach_delete_current(result, cell);\n\nand I don't see how any of the values close by could be unused ...\n\nThe only possibility I can think of is some sort of issue in the old-ish\ngcc release (4.7.2).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 18 Mar 2021 20:27:40 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: GROUP BY DISTINCT" }, { "msg_contents": "On Fri, Mar 19, 2021 at 8:27 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> Hmmm, this seems to fail on lapwing with this error:\n>\n> parse_agg.c: In function 'expand_grouping_sets':\n> parse_agg.c:1851:23: error: value computed is not used\n> [-Werror=unused-value]\n> cc1: all warnings being treated as errors\n>\n> That line is 
this:\n>\n> foreach_delete_current(result, cell);\n>\n> and I don't see how any of the values close by could be unused ...\n>\n> The only possibility I can think of is some sort of issue in the old-ish\n> gcc release (4.7.2).\n\nNo sure what's going on there, but data points: I tried a 32 bit build\nhere (that's the other special thing about lapwing) and didn't see the\nwarning. GCC 10. Also curculio (gcc 4.2) and snapper (gcc 4.7) are\nalso showing this warning, but they don't have -Werror so they don't\nfail. sidewinder (gcc 4.8) is not showing the warning.\n\n\n", "msg_date": "Fri, 19 Mar 2021 10:02:53 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: GROUP BY DISTINCT" }, { "msg_contents": "\n\nOn 3/18/21 10:02 PM, Thomas Munro wrote:\n> On Fri, Mar 19, 2021 at 8:27 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> Hmmm, this seems to fail on lapwing with this error:\n>>\n>> parse_agg.c: In function 'expand_grouping_sets':\n>> parse_agg.c:1851:23: error: value computed is not used\n>> [-Werror=unused-value]\n>> cc1: all warnings being treated as errors\n>>\n>> That line is this:\n>>\n>> foreach_delete_current(result, cell);\n>>\n>> and I don't see how any of the values close by could be unused ...\n>>\n>> The only possibility I can think of is some sort of issue in the old-ish\n>> gcc release (4.7.2).\n> \n> No sure what's going on there, but data points: I tried a 32 bit build\n> here (that's the other special thing about lapwing) and didn't see the\n> warning. GCC 10. Also curculio (gcc 4.2) and snapper (gcc 4.7) are\n> also showing this warning, but they don't have -Werror so they don't\n> fail. sidewinder (gcc 4.8) is not showing the warning.\n> \n\nThanks for the info. So it's likely related to older gcc releases. 
The\nquestion is how to tweak the code to get rid of this ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 18 Mar 2021 22:14:03 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: GROUP BY DISTINCT" }, { "msg_contents": "On Fri, Mar 19, 2021 at 10:14 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> >> The only possibility I can think of is some sort of issue in the old-ish\n> >> gcc release (4.7.2).\n> >\n> > No sure what's going on there, but data points: I tried a 32 bit build\n> > here (that's the other special thing about lapwing) and didn't see the\n> > warning. GCC 10. Also curculio (gcc 4.2) and snapper (gcc 4.7) are\n> > also showing this warning, but they don't have -Werror so they don't\n> > fail. sidewinder (gcc 4.8) is not showing the warning.\n> >\n>\n> Thanks for the info. So it's likely related to older gcc releases. The\n> question is how to tweak the code to get rid of this ...\n\nIt's frustrating to have to do press-ups to fix a problem because a\nzombie Debian 7 system is running with -Werror (though it's always\npossible that it's telling us something interesting...). Anyway, I\nthink someone with a GCC < 4.8 compiler would have to investigate. I\nwas hoping to help, but none of my systems have one in easy-to-install\nformat...\n\n\n", "msg_date": "Fri, 19 Mar 2021 10:57:13 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: GROUP BY DISTINCT" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Fri, Mar 19, 2021 at 10:14 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> Thanks for the info. So it's likely related to older gcc releases. 
The\n>> question is how to tweak the code to get rid of this ...\n\n> It's frustrating to have to do press-ups to fix a problem because a\n> zombie Debian 7 system is running with -Werror (though it's always\n> possible that it's telling us something interesting...). Anyway, I\n> think someone with a GCC < 4.8 compiler would have to investigate. I\n> was hoping to help, but none of my systems have one in easy-to-install\n> format...\n\nHmm ... prairiedog isn't showing the warning, but maybe gaur will.\nI can take a look if nobody else is stepping up.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Mar 2021 18:35:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GROUP BY DISTINCT" }, { "msg_contents": "I wrote:\n> Hmm ... prairiedog isn't showing the warning, but maybe gaur will.\n\nBingo:\n\nparse_agg.c: In function 'expand_grouping_sets':\nparse_agg.c:1851:5: warning: value computed is not used\n\nThis is gcc 4.5, but hopefully whatever shuts it up will also work on 4.7.\nI'll work on figuring that out.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Mar 2021 19:10:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GROUP BY DISTINCT" }, { "msg_contents": "I wrote:\n> This is gcc 4.5, but hopefully whatever shuts it up will also work on 4.7.\n> I'll work on figuring that out.\n\nActually, the problem is pretty obvious after comparing this use\nof foreach_delete_current() to every other one. 
I'm not sure why\nthe compiler warnings are phrased just as they are, but the fix\nI just pushed does make 4.5 happy.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Mar 2021 19:26:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GROUP BY DISTINCT" }, { "msg_contents": "\nOn 3/19/21 12:26 AM, Tom Lane wrote:\n> I wrote:\n>> This is gcc 4.5, but hopefully whatever shuts it up will also work on 4.7.\n>> I'll work on figuring that out.\n> \n> Actually, the problem is pretty obvious after comparing this use\n> of foreach_delete_current() to every other one. I'm not sure why\n> the compiler warnings are phrased just as they are, but the fix\n> I just pushed does make 4.5 happy.\n> \n\nThanks! Yeah, that looks obvious. Funny the older compilers noticed\nthis, not the new fancy ones.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 19 Mar 2021 00:52:17 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: GROUP BY DISTINCT" }, { "msg_contents": "On 3/19/21 12:52 AM, Tomas Vondra wrote:\n> \n> On 3/19/21 12:26 AM, Tom Lane wrote:\n>> I wrote:\n>>> This is gcc 4.5, but hopefully whatever shuts it up will also work on 4.7.\n>>> I'll work on figuring that out.\n>>\n>> Actually, the problem is pretty obvious after comparing this use\n>> of foreach_delete_current() to every other one. I'm not sure why\n>> the compiler warnings are phrased just as they are, but the fix\n>> I just pushed does make 4.5 happy.\n>>\n> \n> Thanks! Yeah, that looks obvious. Funny the older compilers noticed\n> this, not the new fancy ones.\n\n+1\n\nI'm glad the buildfarm is so diverse.\n-- \nVik Fearing\n\n\n", "msg_date": "Fri, 19 Mar 2021 00:55:52 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: GROUP BY DISTINCT" } ]
[ { "msg_contents": "Hello,\n\nI am designing and implementing a connection pool for psycopg3 [1][2].\nSome of the inspiration is coming from HikariCP [3], a Java connection\npool.\n\nOne of the HikariCP configuration parameters is \"maxLifetime\", whose\ndescription is: \"This property controls the maximum lifetime of a\nconnection in the pool. [...] **We strongly recommend setting this\nvalue, and it should be several seconds shorter than any database or\ninfrastructure imposed connection time limit.**\" (bold is theirs,\ndefault value is 30 mins).\n\nWhen discussing the pool features in the psycopg mailing list someone\npointed out \"what is the utility of this parameter? connections don't\nrot, do they?\"\n\nHikari is a generic connection pool, not one specific for Postgres. So\nI'm wondering: is there any value in periodically deleting and\nrecreating connections for a Postgres-specific connection pool? Is a\nMaxLifetime parameter useful?\n\nThank you very much,\n\n-- Daniele\n\n[1]: https://www.psycopg.org/articles/2021/01/17/pool-design/\n[2]: https://github.com/psycopg/psycopg3/blob/connection-pool/psycopg3/psycopg3/pool.py\n[3]: https://github.com/brettwooldridge/HikariCP\n\n\n", "msg_date": "Sun, 21 Feb 2021 19:05:03 +0100", "msg_from": "Daniele Varrazzo <daniele.varrazzo@gmail.com>", "msg_from_op": true, "msg_subject": "Is a connection max lifetime useful in a connection pool?" }, { "msg_contents": "Hi\n\nne 21. 2. 2021 v 19:05 odesílatel Daniele Varrazzo <\ndaniele.varrazzo@gmail.com> napsal:\n\n> Hello,\n>\n> I am designing and implementing a connection pool for psycopg3 [1][2].\n> Some of the inspiration is coming from HikariCP [3], a Java connection\n> pool.\n>\n> One of the HikariCP configuration parameters is \"maxLifetime\", whose\n> description is: \"This property controls the maximum lifetime of a\n> connection in the pool. [...] 
**We strongly recommend setting this\n> value, and it should be several seconds shorter than any database or\n> infrastructure imposed connection time limit.**\" (bold is theirs,\n> default value is 30 mins).\n>\n> When discussing the pool features in the psycopg mailing list someone\n> pointed out \"what is the utility of this parameter? connections don't\n> rot, do they?\"\n>\n> Hikari is a generic connection pool, not one specific for Postgres. So\n> I'm wondering: is there any value in periodically deleting and\n> recreating connections for a Postgres-specific connection pool? Is a\n> MaxLifetime parameter useful?\n>\n>\nI have very strong experience - it is very useful. Long live PostgreSQL\nprocesses can have a lot of allocated memory, and with some unhappy\nconsequences, the operation system memory can be fragmented, and the\noperation system can use swap. Next issue can be bloated catalogue cache\ninside processes. Both issues depends on application design, catalog size,\nand others factors, and the most simple fix of these issues is setting a\nshort life of Postgres sessions - 1 hour is usual value.\n\nRegards\n\nPavel\n\n\n\n> Thank you very much,\n>\n> -- Daniele\n>\n> [1]: https://www.psycopg.org/articles/2021/01/17/pool-design/\n> [2]:\n> https://github.com/psycopg/psycopg3/blob/connection-pool/psycopg3/psycopg3/pool.py\n> [3]: https://github.com/brettwooldridge/HikariCP\n>\n>\n>", "msg_date": "Sun, 21 Feb 2021 19:11:27 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is a connection max lifetime useful in a connection pool?" }, { "msg_contents": "Greetings,\n\n* Daniele Varrazzo (daniele.varrazzo@gmail.com) wrote:\n> I am designing and implementing a connection pool for psycopg3 [1][2].\n> Some of the inspiration is coming from HikariCP [3], a Java connection\n> pool.\n> \n> One of the HikariCP configuration parameters is \"maxLifetime\", whose\n> description is: \"This property controls the maximum lifetime of a\n> connection in the pool. [...] 
**We strongly recommend setting this\n> value, and it should be several seconds shorter than any database or\n> infrastructure imposed connection time limit.**\" (bold is theirs,\n> default value is 30 mins).\n> \n> When discussing the pool features in the psycopg mailing list someone\n> pointed out \"what is the utility of this parameter? connections don't\n> rot, do they?\"\n> \n> Hikari is a generic connection pool, not one specific for Postgres. So\n> I'm wondering: is there any value in periodically deleting and\n> recreating connections for a Postgres-specific connection pool? Is a\n> MaxLifetime parameter useful?\n\nShort answer- yes. In particular, what I read into the HikariCP's\ndocumentation is that they've had cases where, say, a firewall in the\nmiddle is configured to just rudely drop a connection after a certain\namount of time (which can take some time to detect if the firewall just\ndecides to no longer forward packets associated with that connection).\n\nThere's another PG-specific reason though: on systems with loads of\ntables / objects, each object that a given backend touches ends up in a\nper-backend cache. This is great because it helps a lot when the same\nobjects are used over and over, but when there's lots of objects getting\ntouched the per-backend memory usage can increase (and no, it doesn't\never go down; work is being done to improve on that situation but it\nhasn't been fixed so far, afaik). If backends are never let go,\neventually all the backends end up with entries cached for all the\nobjects making for a fair bit of memory being used. 
Naturally, the\ntrade off here is that a brand new backend won't have anything in the\ncache and therefore will have a 'slow start' when it comes to answering\nqueries (this is part of where PG's reputation for slow starting comes\nfrom actually and why connection poolers are particularly useful for\nPG).\n\nThere's other reasons too- PG (rarely, but it happens), and extensions\n(a bit more often..) can end up leaking per-backend memory meaning that\nthe backend memory usage increases without any actual benefit. Dropping\nand reconnecting can help address that (though, of course, we'd like to\nfix all such cases).\n\nThanks,\n\nStephen", "msg_date": "Sun, 21 Feb 2021 13:26:27 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Is a connection max lifetime useful in a connection pool?" }, { "msg_contents": "On Sun, 21 Feb 2021 at 19:12, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> I have very strong experience - it is very useful.\n\nOn Sun, 21 Feb 2021 at 19:26, Stephen Frost <sfrost@snowman.net> wrote:\n\n> Short answer- yes.\n\nSounds good. Thank you very much for your insight!\n\n-- Daniele\n\n\n", "msg_date": "Sun, 21 Feb 2021 19:39:12 +0100", "msg_from": "Daniele Varrazzo <daniele.varrazzo@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Is a connection max lifetime useful in a connection pool?" }, { "msg_contents": "Hi,\n\nOn 2021-02-21 19:05:03 +0100, Daniele Varrazzo wrote:\n> One of the HikariCP configuration parameters is \"maxLifetime\", whose\n> description is: \"This property controls the maximum lifetime of a\n> connection in the pool. [...] **We strongly recommend setting this\n> value, and it should be several seconds shorter than any database or\n> infrastructure imposed connection time limit.**\" (bold is theirs,\n> default value is 30 mins).\n> \n> When discussing the pool features in the psycopg mailing list someone\n> pointed out \"what is the utility of this parameter? 
connections don't\n> rot, do they?\"\n> \n> Hikari is a generic connection pool, not one specific for Postgres. So\n> I'm wondering: is there any value in periodically deleting and\n> recreating connections for a Postgres-specific connection pool? Is a\n> MaxLifetime parameter useful?\n\nIt's extremely useful. If your pooler is used in a large application\nwith different \"parts\" or your application uses schema based\nmulti-tenancy or such, you can end up with the various per-connection\ncaches getting very large without providing much benefit. Unfortunately\nwe do not yet have effective \"pressure\" against that. Similarly, if you\nhave an application using prepared statements you can end up with enough\nprepared statements for that to be a memory usage issue.\n\nAdditionally, if there's ever a problem with memory leakage, be it in\ncore PG or some extension, being able to limit the harm of that can be a\nlife saver.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 21 Feb 2021 15:52:24 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Is a connection max lifetime useful in a connection pool?" }, { "msg_contents": "On Mon, Feb 22, 2021 at 7:52 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2021-02-21 19:05:03 +0100, Daniele Varrazzo wrote:\n> > One of the HikariCP configuration parameters is \"maxLifetime\", whose\n> > description is: \"This property controls the maximum lifetime of a\n> > connection in the pool. [...] **We strongly recommend setting this\n> > value, and it should be several seconds shorter than any database or\n> > infrastructure imposed connection time limit.**\" (bold is theirs,\n> > default value is 30 mins).\n> >\n> > When discussing the pool features in the psycopg mailing list someone\n> > pointed out \"what is the utility of this parameter? connections don't\n> > rot, do they?\"\n> >\n> > Hikari is a generic connection pool, not one specific for Postgres. 
So\n> > I'm wondering: is there any value in periodically deleting and\n> > recreating connections for a Postgres-specific connection pool? Is a\n> > MaxLifetime parameter useful?\n>\n> It's extremely useful.\n\n+1, I multiple times had to rely on similar cleanup in other poolers.\n\n> If your pooler is used in a large application\n> with different \"parts\" or your application uses schema based\n> multi-tenancy or such, you can end up with the various per-connection\n> caches getting very large without providing much benefit. Unfortunately\n> we do not yet have effective \"pressure\" against that. Similarly, if you\n> have an application using prepared statements you can end up with enough\n> prepared statements for that to be a memory usage issue.\n\nAnd in some case negative cache entries can be a problem too.\n\n\n", "msg_date": "Mon, 22 Feb 2021 07:57:30 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is a connection max lifetime useful in a connection pool?" } ]
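The advice that runs through the thread above — bound each connection's lifetime so per-backend caches and any leaked server-side memory are eventually released, and so middleboxes that silently cut long-lived connections are dodged — amounts to a small piece of pool bookkeeping. Below is a minimal illustrative sketch of that policy only; it is not psycopg3's actual pool. The `LifetimePool` name, the injectable `clock`, and the `(conn, born)` tuple bookkeeping are all invented here, and a real pool would also need locking, health checks, and min/max sizing.

```python
import time
from collections import deque


class LifetimePool:
    """Toy pool enforcing a maximum connection lifetime.

    `connect` is any zero-argument factory returning an object with a
    close() method; `clock` is injectable so the policy can be tested
    without sleeping.
    """

    def __init__(self, connect, max_lifetime=30 * 60, clock=time.monotonic):
        self._connect = connect
        self._max_lifetime = max_lifetime
        self._clock = clock
        self._idle = deque()  # (connection, creation_time) pairs

    def _expired(self, born):
        return self._clock() - born >= self._max_lifetime

    def getconn(self):
        # Retire idle connections that outlived max_lifetime; this is
        # what bounds per-backend cache growth on the server side.
        while self._idle:
            conn, born = self._idle.popleft()
            if not self._expired(born):
                return conn, born
            conn.close()
        # Nothing young available: open a fresh connection.
        return self._connect(), self._clock()

    def putconn(self, conn, born):
        if self._expired(born):
            conn.close()  # do not pool it again
        else:
            self._idle.append((conn, born))
```

Following HikariCP's recommendation quoted above, `max_lifetime` would be set several seconds below any database- or infrastructure-imposed limit; because expiry is checked on each checkout and return, old connections are retired gradually rather than all at once.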
[ { "msg_contents": "Hello.\n\nI do some very regular testing on HEAD and my scripts need to know if\nthe catalog version has changed to determine if it needs to pg_restore\nor if a basebackup is okay. In order to get it, I have to do this:\n\n\n# Get the catalog version (there is no better way to do this)\ntmp=$(mktemp --directory)\n$bin/initdb --pgdata=$tmp\ncatversion=$($bin/pg_controldata $tmp | grep \"Catalog version\" \\\n | cut --delimiter=: --fields=2 | xargs)\nrm --recursive --force $tmp\n\n\nI find this less than attractive, especially since the catalog version\nis a property of the binaries and not the data directory. Attached is a\npatchset so that the above can become simply:\n\ncatversion=$($bin/pg_config --catversion)\n\nand a second patch that adds a read-only guc to get at it on the SQL\nlevel using SHOW catalog_version; or similar. I need that because I\nalso do a dump of pg_settings and I would like for it to appear there.\n\nPlease consider.\n-- \nVik Fearing", "msg_date": "Mon, 22 Feb 2021 00:15:20 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Catalog version access" }, { "msg_contents": "Hi,\n\nOn 2021-02-22 00:15:20 +0100, Vik Fearing wrote:\n> I do some very regular testing on HEAD and my scripts need to know if\n> the catalog version has changed to determine if it needs to pg_restore\n> or if a basebackup is okay. In order to get it, I have to do this:\n> \n> \n> # Get the catalog version (there is no better way to do this)\n> tmp=$(mktemp --directory)\n> $bin/initdb --pgdata=$tmp\n> catversion=$($bin/pg_controldata $tmp | grep \"Catalog version\" \\\n> | cut --delimiter=: --fields=2 | xargs)\n> rm --recursive --force $tmp\n\nThat's a pretty heavy way to do it. 
If you have access to pg_config, you\ncould just do\ngrep '^#define CATALOG_VER' $(pg_config --includedir)/server/catalog/catversion.h|awk '{print $3}'\n\n\n> I find this less than attractive, especially since the catalog version\n> is a property of the binaries and not the data directory. Attached is a\n> patchset so that the above can become simply:\n> \n> catversion=$($bin/pg_config --catversion)\n\nSeems roughly reasonable. Although I wonder if we rather should make it\nsomething more generic than just catversion? E.g. a wal page magic bump\nwill also require a dump/restore or pg_upgrade, but won't be detected by\njust doing this. So perhaps we should instead add a pg_config option\nshowing all the different versions that influence on-disk compatibility?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 21 Feb 2021 15:48:15 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Catalog version access" }, { "msg_contents": "On 2/22/21 12:48 AM, Andres Freund wrote:\n> Hi,\n> \n> On 2021-02-22 00:15:20 +0100, Vik Fearing wrote:\n>> I do some very regular testing on HEAD and my scripts need to know if\n>> the catalog version has changed to determine if it needs to pg_restore\n>> or if a basebackup is okay. In order to get it, I have to do this:\n>>\n>>\n>> # Get the catalog version (there is no better way to do this)\n>> tmp=$(mktemp --directory)\n>> $bin/initdb --pgdata=$tmp\n>> catversion=$($bin/pg_controldata $tmp | grep \"Catalog version\" \\\n>> | cut --delimiter=: --fields=2 | xargs)\n>> rm --recursive --force $tmp\n> \n> That's a pretty heavy way to do it.\n\nThat's what I thought, too!\n\n> If you have access to pg_config, you\n> could just do\n> grep '^#define CATALOG_VER' $(pg_config --includedir)/server/catalog/catversion.h|awk '{print $3}'\n\nOh thanks. 
That's much better.\n\n>> I find this less than attractive, especially since the catalog version\n>> is a property of the binaries and not the data directory. Attached is a\n>> patchset so that the above can become simply:\n>>\n>> catversion=$($bin/pg_config --catversion)\n> \n> Seems roughly reasonable. Although I wonder if we rather should make it\n> something more generic than just catversion? E.g. a wal page magic bump\n> will also require a dump/restore or pg_upgrade, but won't be detected by\n> just doing this. So perhaps we should instead add a pg_config option\n> showing all the different versions that influence on-disk compatibility?\n\nDo you mean one single thing somehow lumped together, or one for each\nversion number?\n-- \nVik Fearing\n\n\n", "msg_date": "Mon, 22 Feb 2021 00:55:03 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Catalog version access" }, { "msg_contents": "On Sun, Feb 21, 2021, at 8:15 PM, Vik Fearing wrote:\n> and a second patch that adds a read-only guc to get at it on the SQL\n> level using SHOW catalog_version; or similar. I need that because I\n> also do a dump of pg_settings and I would like for it to appear there.\nThe catalog version number is already available in pg_control_system().\n\npostgres=# select * from pg_control_system();\n-[ RECORD 1 ]------------+-----------------------\npg_control_version | 1300\ncatalog_version_no | 202007201\nsystem_identifier | 6931867587550812316\npg_control_last_modified | 2021-02-21 20:59:06-03\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Sun, 21 Feb 2021 21:27:29 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: Catalog version access" }, { "msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> On 2/22/21 12:48 AM, Andres Freund wrote:\n>> Seems roughly reasonable. Although I wonder if we rather should make it\n>> something more generic than just catversion? E.g. a wal page magic bump\n>> will also require a dump/restore or pg_upgrade, but won't be detected by\n>> just doing this. So perhaps we should instead add a pg_config option\n>> showing all the different versions that influence on-disk compatibility?\n\n> Do you mean one single thing somehow lumped together, or one for each\n> version number?\n\nFWIW, I think asking pg_config about this is a guaranteed way of having\nversion-skew-like bugs. If we're going to bother with providing a way\nto get this info, we should make it possible to ask the running server.\n\n(That would open up some security questions: do we want to let\nunprivileged users know this info? 
I guess if version() is not\nprotected then this needn't be either.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 21 Feb 2021 19:54:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Catalog version access" }, { "msg_contents": "Hi,\n\nOn 2021-02-21 19:54:01 -0500, Tom Lane wrote:\n> Vik Fearing <vik@postgresfriends.org> writes:\n> > On 2/22/21 12:48 AM, Andres Freund wrote:\n> >> Seems roughly reasonable. Although I wonder if we rather should make it\n> >> something more generic than just catversion? E.g. a wal page magic bump\n> >> will also require a dump/restore or pg_upgrade, but won't be detected by\n> >> just doing this. So perhaps we should instead add a pg_config option\n> >> showing all the different versions that influence on-disk compatibility?\n> \n> > Do you mean one single thing somehow lumped together, or one for each\n> > version number?\n> \n> FWIW, I think asking pg_config about this is a guaranteed way of having\n> version-skew-like bugs.\n\nCould you elaborate a bit?\n\n\n> If we're going to bother with providing a way\n> to get this info, we should make it possible to ask the running server.\n\nIn Vik's case there is no running server to ask, IIUC. He has an\nexisting cluster running an \"old\" set of binaries, and a set of\nbinaries. He wants to know if he needs to pg_upgrade, or can start from\na basebackup. The old version he can get from pg_controldata, or the\ncatalog. But except for initdb'ing a throw-away cluster, there's no way\nto get that for the new cluster that doesn't involve grepping headers...\n\n\n> (That would open up some security questions: do we want to let\n> unprivileged users know this info? I guess if version() is not\n> protected then this needn't be either.)\n\nI don't see a reason it'd need to be protected. 
Furthermore, the ship\nhas sailed:\nSELECT catalog_version_no FROM pg_control_system();\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 21 Feb 2021 17:34:36 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Catalog version access" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-02-21 19:54:01 -0500, Tom Lane wrote:\n>> FWIW, I think asking pg_config about this is a guaranteed way of having\n>> version-skew-like bugs.\n\n> Could you elaborate a bit?\n\nHow do you know that the pg_config you found has anything to do with the\nserver you're connected to?\n\n>> If we're going to bother with providing a way\n>> to get this info, we should make it possible to ask the running server.\n\n> In Vik's case there is no running server to ask, IIUC.\n\nHm. If you're about to initdb or start the server then there's more\nreason to think you can find a matching pg_config. Still, pg_config\nis not going to tell you what is actually in the data directory, so\nit's not clear to me how it helps with \"do I need to initdb?\".\n\n> He has an\n> existing cluster running an \"old\" set of binaries, and a set of\n> binaries. He wants to know if he needs to pg_upgrade, or can start from\n> a basebackup. The old version he can get from pg_controldata, or the\n> catalog. But except for initdb'ing a throw-away cluster, there's no way\n> to get that for the new cluster that doesn't involve grepping headers...\n\nFor production cases it'd be sufficient to compare pg_config --version\noutput. I suppose if you want to automate this for development versions\nyou'd wish for something more fine-grained ... 
but I think you can\nalready use configure's --with-extra-version to stick catversion or\nwhatever into the --version string.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 21 Feb 2021 20:53:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Catalog version access" }, { "msg_contents": "Hi,\n\nOn 2021-02-21 20:53:52 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> >> If we're going to bother with providing a way\n> >> to get this info, we should make it possible to ask the running server.\n> \n> > In Vik's case there is no running server to ask, IIUC.\n> \n> Hm. If you're about to initdb or start the server then there's more\n> reason to think you can find a matching pg_config. Still, pg_config\n> is not going to tell you what is actually in the data directory, so\n> it's not clear to me how it helps with \"do I need to initdb?\".\n\nImagine trying to run regular tests of HEAD, where the tests require a\nlarge database to be loaded. Re-loading the data for every [few] commits\nis prohibitively time consuming, and even just running pg_upgrade is\npainful. So you'd like to re-use a \"template\" data directory with the\ndata loaded if possible (i.e. no catversion / WAL / ... version bumps),\nand a pg_upgrade otherwise.\n\nIn such a situation it's easy to access the catalog version for the\nexisting data directory (pg_controldata, or pg_control_system()) - but\nthere's no convenient way to figure out what the catversion of the\nto-be-tested version will be. Vik's approach to figuring that out was\ninitdb'ing a throw-away data directory, using pg_controldata, and\ndiscarding that data directory - not pretty. There's no version skew\nissue here as far as I can tell, given he's initdbing freshly anyway.\n\nThe only argument I see against such an option is that arguably just\ngrepping for the information in the headers isn't too hard. 
But that's\nthen something everybody has to do, there's the issue of plain unix\ncommands not working on windows, etc.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 21 Feb 2021 18:24:07 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Catalog version access" }, { "msg_contents": "On 2/22/21 3:24 AM, Andres Freund wrote:\n> Imagine trying to run regular tests of HEAD, where the tests require a\n> large database to be loaded. Re-loading the data for every [few] commits\n> is prohibitively time consuming, and even just running pg_upgrade is\n> painful. So you'd like to re-use a \"template\" data directory with the\n> data loaded if possible (i.e. no catversion / WAL / ... version bumps),\n> and a pg_upgrade otherwise.\n\nThis is exactly what I am doing.\n-- \nVik Fearing\n\n\n", "msg_date": "Mon, 22 Feb 2021 08:00:47 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Catalog version access" }, { "msg_contents": "On 22.02.21 08:00, Vik Fearing wrote:\n> On 2/22/21 3:24 AM, Andres Freund wrote:\n>> Imagine trying to run regular tests of HEAD, where the tests require a\n>> large database to be loaded. Re-loading the data for every [few] commits\n>> is prohibitively time consuming, and even just running pg_upgrade is\n>> painful. So you'd like to re-use a \"template\" data directory with the\n>> data loaded if possible (i.e. no catversion / WAL / ... version bumps),\n>> and a pg_upgrade otherwise.\n> \n> This is exactly what I am doing.\n\nIf what you want to know is whether a given binary can run against a \ngiven data directory then CATALOG_VERSION_NO isn't the only thing you \nneed to check. The full truth of this is in ReadControlFile(). The \nbest way to get that answer is to start a server and see if it \ncomplains. 
You can even grep the log for \"It looks like you need to \ninitdb.\".\n\n\n", "msg_date": "Wed, 3 Mar 2021 18:35:08 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Catalog version access" }, { "msg_contents": "On 3/3/21 6:35 PM, Peter Eisentraut wrote:\n> On 22.02.21 08:00, Vik Fearing wrote:\n>> On 2/22/21 3:24 AM, Andres Freund wrote:\n>>> Imagine trying to run regular tests of HEAD, where the tests require a\n>>> large database to be loaded. Re-loading the data for every [few] commits\n>>> is prohibitively time consuming, and even just running pg_upgrade is\n>>> painful. So you'd like to re-use a \"template\" data directory with the\n>>> data loaded if possible (i.e. no catversion / WAL / ... version bumps),\n>>> and a pg_upgrade otherwise.\n>>\n>> This is exactly what I am doing.\n> \n> If what you want to know is whether a given binary can run against a\n> given data directory then CATALOG_VERSION_NO isn't the only thing you\n> need to check.  The full truth of this is in ReadControlFile().  The\n> best way to get that answer is to start a server and see if it\n> complains.  You can even grep the log for \"It looks like you need to\n> initdb.\".\n\nIn that case, what would everyone think about a `pg_ctl check` option?\n-- \nVik Fearing\n\n\n", "msg_date": "Wed, 3 Mar 2021 18:52:04 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Catalog version access" }, { "msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> On 3/3/21 6:35 PM, Peter Eisentraut wrote:\n>> If what you want to know is whether a given binary can run against a\n>> given data directory then CATALOG_VERSION_NO isn't the only thing you\n>> need to check.  The full truth of this is in ReadControlFile().  The\n>> best way to get that answer is to start a server and see if it\n>> complains.  
You can even grep the log for \"It looks like you need to\n>> initdb.\".\n\n> In that case, what would everyone think about a `pg_ctl check` option?\n\nThe trouble with Peter's recipe is that it doesn't work if there is\nalready a server instance running there (or at least I think we'll\nbitch about the existing postmaster first, maybe I'm wrong). Now,\nthat's not such a big problem for the use-case you were describing.\nBut I bet if we expose this method as an apparently-general-purpose\npg_ctl option, there'll be complaints.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Mar 2021 14:16:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Catalog version access" }, { "msg_contents": "On Wed, Mar 3, 2021 at 8:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Vik Fearing <vik@postgresfriends.org> writes:\n> > On 3/3/21 6:35 PM, Peter Eisentraut wrote:\n> >> If what you want to know is whether a given binary can run against a\n> >> given data directory then CATALOG_VERSION_NO isn't the only thing you\n> >> need to check. The full truth of this is in ReadControlFile(). The\n> >> best way to get that answer is to start a server and see if it\n> >> complains. You can even grep the log for \"It looks like you need to\n> >> initdb.\".\n>\n> > In that case, what would everyone think about a `pg_ctl check` option?\n>\n> The trouble with Peter's recipe is that it doesn't work if there is\n> already a server instance running there (or at least I think we'll\n> bitch about the existing postmaster first, maybe I'm wrong). 
Now,\n> that's not such a big problem for the use-case you were describing.\n> But I bet if we expose this method as an apparently-general-purpose\n> pg_ctl option, there'll be complaints.\n\nAnother option could be to provide a switch to the postmaster binary.\nUsing pg_config as originally suggested is risky because you might\npick up the wrong postmaster, but if you put it on the actual\npostmaster binary you certainly know which one you're on... As this is\nsomething that's primarily of interest to developers, it's also a bit\nlower weight than having a \"heavy\" solution like an entire mode for\npg_ctl.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Thu, 8 Apr 2021 11:58:49 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Catalog version access" }, { "msg_contents": "On 4/8/21, 2:58 AM, \"Magnus Hagander\" <magnus@hagander.net> wrote:\r\n> Another option could be to provide a switch to the postmaster binary.\r\n> Using pg_config as originally suggested is risky because you might\r\n> pick up the wrong postmaster, but if you put it on the actual\r\n> postmaster binary you certainly know which one you're on... 
As this is\r\n> something that's primarily of interest to developers, it's also a bit\r\n> lower weight than having a \"heavy\" solution like an entire mode for\r\n> pg_ctl.\r\n\r\nI was looking at the --check switch for the postgres binary recently\r\n[0], and this sounds like something that might fit in nicely there.\r\nIn the attached patch, --check will also check the control file if one\r\nexists.\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/0545F7B3-70C0-4DE8-8C85-EAFE6113B7EE%40amazon.com", "msg_date": "Mon, 16 Aug 2021 18:12:54 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Catalog version access" }, { "msg_contents": "On Mon, Aug 16, 2021 at 06:12:54PM +0000, Bossart, Nathan wrote:\n> I was looking at the --check switch for the postgres binary recently\n> [0], and this sounds like something that might fit in nicely there.\n> In the attached patch, --check will also check the control file if one\n> exists.\n\nThis would not work on a running postmaster as CreateDataDirLockFile()\nis called beforehand, but we want this capability, no?\n\nAbusing a bootstrap option for this purpose does not look like a good\nidea, to be honest, especially for something only used internally now\nand undocumented, but we want to use something aimed at an external\nuse with some documentation. Using a separate switch would be more\nadapted IMO. Also, I think that we should be careful with the read of\nthe control file to avoid false positives? 
We can rely on an atomic\nread/write thanks to its maximum size of 512 bytes, but this looks\na lot like what has been done recently with postgres -C for runtime\nGUCs, that *require* a read of the control file before grabbing the\nvalues we are interested in.\n--\nMichael", "msg_date": "Mon, 24 Jan 2022 12:30:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Catalog version access" }, { "msg_contents": "Thanks for taking a look!\r\n\r\nOn 1/23/22, 7:31 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> On Mon, Aug 16, 2021 at 06:12:54PM +0000, Bossart, Nathan wrote:\r\n>> I was looking at the --check switch for the postgres binary recently\r\n>> [0], and this sounds like something that might fit in nicely there.\r\n>> In the attached patch, --check will also check the control file if one\r\n>> exists.\r\n>\r\n> This would not work on a running postmaster as CreateDataDirLockFile()\r\n> is called beforehand, but we want this capability, no?\r\n\r\nI was not under the impression this was a requirement, based on the\r\nuse-case discussed upthread [0]. \r\n\r\n> Abusing a bootstrap option for this purpose does not look like a good\r\n> idea, to be honest, especially for something only used internally now\r\n> and undocumented, but we want to use something aimed at an external\r\n> use with some documentation. Using a separate switch would be more\r\n> adapted IMO.\r\n\r\nThis is the opposite of what Magnus proposed earlier [1]. Do we need\r\na new pg_ctl option for this? It seems like it is really only\r\nintended for use by PostgreSQL developers, but perhaps there are other\r\nuse-cases I am not thinking of. In any case, the pg_ctl option would\r\nprobably end up using --check (or another similar mode) behind the\r\nscenes.\r\n\r\n> Also, I think that we should be careful with the read of\r\n> the control file to avoid false positives? 
We can rely on an atomic\r\n> read/write thanks to its maximum size of 512 bytes, but this looks\r\n> like a lot what has been done recently with postgres -C for runtime\r\n> GUCs, that *require* a read of the control file before grabbing the\r\n> values we are interested in.\r\n\r\nSorry, I'm not following this one. In my proposed patch, the control\r\nfile (if one exists) is read after CreateDataDirLockFile(), just like\r\nPostmasterMain().\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/20210222022407.ecaygvx2ise6uwyz%40alap3.anarazel.de\r\n[1] https://postgr.es/m/CABUevEySovaEDci7c0DXOrV6c7JzWqYzfVwOiRUJxMao%3D9seEw%40mail.gmail.com\r\n\r\n", "msg_date": "Mon, 24 Jan 2022 20:40:08 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Catalog version access" }, { "msg_contents": "On Mon, Jan 24, 2022 at 08:40:08PM +0000, Bossart, Nathan wrote:\n> On 1/23/22, 7:31 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\n>> On Mon, Aug 16, 2021 at 06:12:54PM +0000, Bossart, Nathan wrote:\n>>> I was looking at the --check switch for the postgres binary recently\n>>> [0], and this sounds like something that might fit in nicely there.\n>>> In the attached patch, --check will also check the control file if one\n>>> exists.\n>>\n>> This would not work on a running postmaster as CreateDataDirLockFile()\n>> is called beforehand, but we want this capability, no?\n> \n> I was not under the impression this was a requirement, based on the\n> use-case discussed upthread [0]. \n\nHmm. I got a different impression as of this one:\nhttps://www.postgresql.org/message-id/3496407.1613955241@sss.pgh.pa.us\nBut I can see downthread that this is not the case. Sorry for the\nconfusion.\n\n>> Abusing a bootstrap option for this purpose does not look like a good\n>> idea, to be honest, especially for something only used internally now\n>> and undocumented, but we want to use something aimed at an external\n>> use with some documentation. 
Using a separate switch would be more\n>> adapted IMO.\n> \n> This is the opposite of what Magnus proposed earlier [1]. Do we need\n> a new pg_ctl option for this? It seems like it is really only\n> intended for use by PostgreSQL developers, but perhaps there are other\n> use-cases I am not thinking of. In any case, the pg_ctl option would\n> probably end up using --check (or another similar mode) behind the\n> scenes.\n\nBased on the latest state of the thread, I am understanding that we\ndon't want a new option for pg_ctl for this feature, and using a\nbootstrap's --check for this purpose is not a good idea IMO. What I\nguess from Magnus' suggestion would be to add a completely different\nswitch. \n\nOnce you remove the requirement of a running server, we have basically\nwhat has been recently implemented with postgres -C for\nruntime-computed GUCs, because we already go through a read of the\ncontrol file to be able to print those GUCs with their correct\nvalues. This also means that it is already possible to check if a\ndata folder is compatible with a set of binaries with this facility,\nas any postgres -C command with a runtime GUC would trigger this\ncheck. Using any of the existing runtime GUCs may be confusing, but\nthat would work. And I am not really convinced that we have any need\nto add a specific GUC for this purpose, be it a sort of\nis_controlfile_valid or controlfile_checksum (CRC32 of the control\nfile).\n\n>> Also, I think that we should be careful with the read of\n>> the control file to avoid false positives? We can rely on an atomic\n>> read/write thanks to its maximum size of 512 bytes, but this looks\n>> like a lot what has been done recently with postgres -C for runtime\n>> GUCs, that *require* a read of the control file before grabbing the\n>> values we are interested in.\n> \n> Sorry, I'm not following this one. 
In my proposed patch, the control\n> file (if one exists) is read after CreateDataDirLockFile(), just like\n> PostmasterMain().\n\nWhile looking at the patch, I was thinking about the fact that we may\nwant to support the case even if a server is running. If we don't, my\nworries about the concurrent control file activities are moot. \n--\nMichael", "msg_date": "Tue, 25 Jan 2022 13:12:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Catalog version access" }, { "msg_contents": "On Tue, Jan 25, 2022 at 01:12:32PM +0900, Michael Paquier wrote:\n> Once you remove the requirement of a running server, we have basically\n> what has been recently implemented with postgres -C for\n> runtime-computed GUCs, because we already go through a read of the\n> control file to be able to print those GUCs with their correct\n> values. This also means that it is already possible to check if a\n> data folder is compatible with a set of binaries with this facility,\n> as any postgres -C command with a runtime GUC would trigger this\n> check. Using any of the existing runtime GUCs may be confusing, but\n> that would work. And I am not really convinced that we have any need\n> to add a specific GUC for this purpose, be it a sort of\n> is_controlfile_valid or controlfile_checksum (CRC32 of the control\n> file).\n\nThinking more about this one, we can already do that, so I have\nmarked the patch as RwF. 
Perhaps we could just add a GUC, but that\nfeels a bit dummy.\n--\nMichael", "msg_date": "Mon, 31 Jan 2022 16:57:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Catalog version access" }, { "msg_contents": "On Mon, Jan 31, 2022 at 04:57:13PM +0900, Michael Paquier wrote:\n> On Tue, Jan 25, 2022 at 01:12:32PM +0900, Michael Paquier wrote:\n>> Once you remove the requirement of a running server, we have basically\n>> what has been recently implemented with postgres -C for\n>> runtime-computed GUCs, because we already go through a read of the\n>> control file to be able to print those GUCs with their correct\n>> values. This also means that it is already possible to check if a\n>> data folder is compatible with a set of binaries with this facility,\n>> as any postgres -C command with a runtime GUC would trigger this\n>> check. Using any of the existing runtime GUCs may be confusing, but\n>> that would work. And I am not really convinced that we have any need\n>> to add a specific GUC for this purpose, be it a sort of\n>> is_controlfile_valid or controlfile_checksum (CRC32 of the control\n>> file).\n> \n> Thinking more about this one, we can already do that, so I have\n> marked the patch as RwF. Perhaps we could just add a GUC, but that\n> feels a bit dummy.\n\nSorry, I missed this thread earlier. You're right, we can just do\nsomething like the following to achieve basically the same result:\n\n\tpostgres -D . -C data_checksums\n\nUnless Vik has any objections, this can probably be marked as Withdrawn.\nPerhaps we can look into providing a new option for \"postgres\" at some\npoint in the future, but I don't sense a ton of demand at the moment.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 31 Jan 2022 10:10:52 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Catalog version access" } ]
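The workflow this thread converges on — reuse a template data directory when the catalog version recorded in its control file matches the freshly built binaries, and fall back to initdb/pg_upgrade otherwise — can be scripted around `pg_controldata` output. A minimal sketch follows; the helper names are ours and not anything PostgreSQL ships, while the "Catalog version number" label is the line `pg_controldata` actually prints. As Peter notes above, the catalog version is not the only compatibility field in pg_control, so treat this as a heuristic rather than a full compatibility check.

```python
import re

# pg_controldata prints a line of the form:
# "Catalog version number:               202107181"
_CATVER_RE = re.compile(r"^Catalog version number:\s*(\d+)\s*$", re.MULTILINE)


def parse_catalog_version(controldata_output):
    """Extract the catalog version from `pg_controldata <datadir>` output."""
    m = _CATVER_RE.search(controldata_output)
    if m is None:
        raise ValueError("no 'Catalog version number' line in pg_controldata output")
    return int(m.group(1))


def needs_initdb_or_upgrade(template_output, fresh_output):
    """True if the template data directory cannot be reused as-is, i.e. the
    two control files record different catalog versions."""
    return parse_catalog_version(template_output) != parse_catalog_version(fresh_output)
```

In a test harness the output would come from something like `subprocess.run(["pg_controldata", datadir], capture_output=True, text=True).stdout`, run once against the template directory and once against a scratch directory initialized by the binaries under test.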
[ { "msg_contents": "Hi.\n\nI created a patch which improves psql's TRUNCATE tab completion.\nCurrently, tab completion can complete only the name of the table to be truncated.\nThis patch enables psql to complete other keywords related to\nTRUNCATE.\n\nRegards.\nKota Miyake", "msg_date": "Mon, 22 Feb 2021 16:04:10 +0900", "msg_from": "miyake_kouta <miyake_kouta@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "[PATCH] Feature improvement for TRUNCATE tab completion." }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nOther than a \"Hunk #1 succeeded at 3832 (offset 33 lines).\" message while applying the patch to\r\nthe current master branch (commit 6a03369a71d4a7dc5b8d928aab775ddd28b72494), I found no issue with the patch.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Mon, 22 Feb 2021 16:44:03 +0000", "msg_from": "Muhammad Usama <m.usama@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Feature improvement for TRUNCATE tab completion." 
}, { "msg_contents": "\n\nOn 2021/02/23 1:44, Muhammad Usama wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: not tested\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: not tested\n> \n> Other than \"Hunk #1 succeeded at 3832 (offset 33 lines).\" message while applying the patch to\n> current master branch (commit 6a03369a71d4a7dc5b8d928aab775ddd28b72494) I found no issue with the patch.\n> \n> The new status of this patch is: Ready for Committer\n\nOK, so barring any objection, I will commit this patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 24 Feb 2021 12:59:34 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Feature improvement for TRUNCATE tab completion." }, { "msg_contents": "\n\nOn 2021/02/24 12:59, Fujii Masao wrote:\n> \n> \n> On 2021/02/23 1:44, Muhammad Usama wrote:\n>> The following review has been posted through the commitfest application:\n>> make installcheck-world:  not tested\n>> Implements feature:       tested, passed\n>> Spec compliant:           tested, passed\n>> Documentation:            not tested\n>>\n>> Other than \"Hunk #1 succeeded at 3832 (offset 33 lines).\" message while applying the patch to\n>> current master branch (commit 6a03369a71d4a7dc5b8d928aab775ddd28b72494) I found no issue with the patch.\n>>\n>> The new status of this patch is: Ready for Committer\n> \n> OK, so barring any objection, I will commit this patch.\n\nPushed. 
Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 25 Feb 2021 18:22:44 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Feature improvement for TRUNCATE tab completion." } ]
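For reference, the shape of the committed improvement: after `TRUNCATE` (and the optional `TABLE`/`ONLY`), psql now also offers the trailing keywords of the TRUNCATE syntax. The real implementation is the `Matches()`/`COMPLETE_WITH` rule tables in `src/bin/psql/tab-complete.c`; the sketch below is only an illustrative Python model of that kind of rule set, with the keyword list taken from the documented TRUNCATE grammar (`TRUNCATE [ TABLE ] [ ONLY ] name [, ...] [ RESTART IDENTITY | CONTINUE IDENTITY ] [ CASCADE | RESTRICT ]`).

```python
def truncate_completions(words, tables):
    """Toy model of tab-completion suggestions for a partial TRUNCATE
    command; `words` is the input so far split into words, `tables` the
    known table names. Illustrative only -- not psql's implementation."""
    assert words and words[0].upper() == "TRUNCATE"
    prev = words[-1].upper()
    if prev == "TRUNCATE":
        # TRUNCATE [ TABLE ] [ ONLY ] name [, ...]
        return sorted(tables) + ["TABLE", "ONLY"]
    if prev in ("TABLE", "ONLY"):
        return sorted(tables)
    if prev in ("RESTART", "CONTINUE"):
        return ["IDENTITY"]
    if prev in ("IDENTITY", "CASCADE", "RESTRICT"):
        return []  # nothing mandatory follows these keywords
    # After a table name: the optional trailing clauses.
    return ["RESTART IDENTITY", "CONTINUE IDENTITY", "CASCADE", "RESTRICT"]
```

The model mirrors the patch's behavior in spirit only; psql additionally handles quoting, schema qualification, and query-based table lookups that are out of scope here.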
[ { "msg_contents": "Hi all,\n\nI am quite new to PostgreSQL so forgive me if my understanding of the code\nbelow is wrong and please clarify what I have misunderstood.\n\nI started to experiment with the table access method interface to see if it\ncan be used for some ideas I have.\n\nFor the experiment, I am using a simple in-memory table access method that\nstores all the data as shared memory pages instead of disk pages. I know\nthere are other ways to get good performance, but this implementation is\ngood enough for my experiments since it tests a few things with the Table\nAM interface that I am wondering about.\n\nNow, the first question I was looking at is whether it is possible to\nhandle DDL properly if you have non-normal storage: can it create new\nstorage blocks on table creation, clean up on dropping a table, and\nhandle schema changes on ALTER TABLE?\n\nCreating new blocks for a table is straightforward to implement by using\nthe `relation_set_new_filenode` callback where you can create new memory\nblocks for a relation, but I cannot find a way to clean up those blocks\nwhen the table is dropped nor a way to handle a change of the schema for a\ntable.\n\nThe `relation_set_new_filenode` is indirectly called from\n`heap_create_with_catalog`, but there is no corresponding callback from\n`heap_drop_with_catalog`. It also seems like the intention is that the\ncallback should call `RelationCreateStorage` itself (makes sense, since the\naccess method knows about how to use the storage), so it seems natural to\nadd a `relation_reset_filenode` to the table AM that is called from\n`heap_drop_with_catalog` for tables and add that to the heap implementation\n(see the attached patch).\n\nAltering the schema does not seem to be covered at all, but this is\nsomething that table access methods need to know about since it might want\nto optimize the internal storage when the schema changes. I have not been
I have not been\nable to find any discussions around this, but it seems like a natural thing\nto do with a table. Have I misunderstood how this works?\n\nBest wishes,\nMats Kindahl\nTimescale", "msg_date": "Mon, 22 Feb 2021 08:33:21 +0100", "msg_from": "Mats Kindahl <mats@timescale.com>", "msg_from_op": true, "msg_subject": "Table AM and DDLs" }, { "msg_contents": "Hi,\n\nOn 2021-02-22 08:33:21 +0100, Mats Kindahl wrote:\n> I started to experiment with the table access method interface to see if it\n> can be used for some ideas I have.\n\nCool.\n\n\n> The `relation_set_new_filenode` is indirectly called from\n> `heap_create_with_catalog`, but there is no corresponding callback from\n> `heap_drop_with_catalog`. It also seems like the intention is that the\n> callback should call `RelationCreateStorage` itself (makes sense, since the\n> access method knows about how to use the storage), so it seems natural to\n> add a `relation_reset_filenode` to the table AM that is called from\n> `heap_drop_with_catalog` for tables and add that to the heap implementation\n> (see the attached patch).\n\nI don't think that's quite right. It's not exactly obvious from the\nname, but RelationDropStorage() does not actually drop storage. Instead\nit *schedules* the storage to be dropped upon commit.\n\nThe reason for deferring the dropping of table storage is that DDL in\npostgres is transactional. Therefore we cannot remove the storage at the\nmoment the DROP TABLE is executed - only when the transaction that\nperformed the DDL commits. 
Therefore just providing you with a callback\nthat runs in heap_drop_with_catalog() doesn't really achieve much -\nyou'd not have a way to execute the \"actual\" dropping of the relation at\nthe later stage.\n\n\n> Creating new blocks for a table is straightforward to implement by using\n> the `relation_set_new_filenode` callback where you can create new memory\n> blocks for a relation, but I cannot find a way to clean up those blocks\n> when the table is dropped nor a way to handle a change of the schema for a\n> table.\n\nWhat precisely do you mean with the \"handle a change of the schema\" bit?\nI.e. what would you like to do, and what do you think is preventing you\nfrom it? But before you answer see my next point below.\n\n\n> Altering the schema does not seem to be covered at all, but this is\n> something that table access methods need to know about since it might want\n> to optimize the internal storage when the schema changes. I have not been\n> able to find any discussions around this, but it seems like a natural thing\n> to do with a table. Have I misunderstood how this works?\n\nDue to postgres' transactional DDL you cannot really change the storage\nlayout of *existing data* when that DDL command is executed - the data\nstill needs to be interpretable in case the DDL is rolled back\n(including when crashing).\n\nBefore I explain some more: Could you describe in a bit more detail what\nkind of optimization you'd like to make?\n\nBack to schema change handling:\n\nFor some schema changes postgres assumes that they can be done\n\"in-place\", e.g. adding a column to a table.\n\nOther changes, e.g. changing the type of a column \"sufficiently\", will\ncause a so called table rewrite. 
Which means that a new relation will be\ncreated (including a call to relation_set_new_filenode()), then that new\nrelation will get all the new data inserted, and then\npg_class->relfilenode for the "original" relation will be changed to the\n"rewritten" table (there's two variants of this, one for rewrites due\nto ALTER TABLE and a separate one for VACUUM FULL/CLUSTER).\n\nWhen the transaction containing such a rewrite commits, that\n->relfilenode change becomes visible for everyone, and the old\nrelfilenode will be deleted.\n\n\nThis means that right now there's no easy way to store the data anywhere\nbut in the file referenced by pg_class.relfilenode. I don't think\nanybody would object on principle to making the necessary infrastructure\nchanges to support storing data elsewhere - but I think it'll also not\nquite as simple as the change you suggested :(.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 22 Feb 2021 17:11:01 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Table AM and DDLs" }, { "msg_contents": "On Tue, Feb 23, 2021 at 2:11 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n\nHi Andres,\n\nThanks for the answer and sorry about the late reply.\n\n\n> On 2021-02-22 08:33:21 +0100, Mats Kindahl wrote:\n> > I started to experiment with the table access method interface to see if\n> it\n> > can be used for some ideas I have.\n>\n> Cool.\n>\n>\n> > The `relation_set_new_filenode` is indirectly called from\n> > `heap_create_with_catalog`, but there is no corresponding callback from\n> > `heap_drop_with_catalog`. 
It also seems like the intention is that the\n> > callback should call `RelationCreateStorage` itself (makes sense, since\n> the\n> > access method knows about how to use the storage), so it seems natural to\n> > add a `relation_reset_filenode` to the table AM that is called from\n> > `heap_drop_with_catalog` for tables and add that to the heap\n> implementation\n> > (see the attached patch).\n>\n> I don't think that's quite right. It's not exactly obvious from the\n> name, but RelationDropStorage() does not actually drop storage. Instead\n> it *schedules* the storage to be dropped upon commit.\n>\n> The reason for deferring the dropping of table storage is that DDL in\n> postgres is transactional. Therefore we cannot remove the storage at the\n> moment the DROP TABLE is executed - only when the transaction that\n> performed the DDL commits. Therefore just providing you with a callback\n> that runs in heap_drop_with_catalog() doesn't really achieve much -\n> you'd not have a way to execute the \"actual\" dropping of the relation at\n> the later stage.\n>\n\nYeah, I found the chain (performDeletion -> deleteOneObject -> doDeletion\n-> heap_drop_with_catalog) where the delete was just scheduled for deletion\nbut it appeared like this was the place to actually perform the \"actual\"\ndelete. Looking closer, I see this was the wrong location. 
However, the\nintention was to get a callback when the \"actual\" delete should happen.\nBefore that, the blocks are still potentially alive and could be read, so\nthey shouldn't be recycled.\n\nIt seems the right location is in the storage manager (smgr_unlink\nin smgr.c), but that does not seem to be extensible, or are there any plans\nto make it available so that you can implement something other than just\n\"magnetic disk\"?\n\n\n> > Creating new blocks for a table is straightforward to implement by using\n> > the `relation_set_new_filenode` callback where you can create new memory\n> > blocks for a relation, but I cannot find a way to clean up those blocks\n> > when the table is dropped nor a way to handle a change of the schema for\n> a\n> > table.\n>\n> What precisely do you mean with the \"handle a change of the schema\" bit?\n> I.e. what would you like to do, and what do you think is preventing you\n> from it? But before you answer see my next point below.\n>\n>\n> > Altering the schema does not seem to be covered at all, but this is\n> > something that table access methods need to know about since it might\n> want\n> > to optimize the internal storage when the schema changes. I have not been\n> > able to find any discussions around this, but it seems like a natural\n> thing\n> > to do with a table. Have I misunderstood how this works?\n>\n> Due to postgres' transactional DDL you cannot really change the storage\n> layout of *existing data* when that DDL command is executed - the data\n> still needs to be interpretable in case the DDL is rolled back\n> (including when crashing).\n>\n\nNo, I didn't expect this, but I expected some means to see that a schema\nchange is about to happen.\n\n\n> Before I explain some more: Could you describe in a bit more detail what\n> kind of optimization you'd like to make?\n>\n\n
If a memory table can be implemented entirely\nin the extension and storage managed fully, there is a lot of interesting\npotential for various implementations of table backends. For this to work I\nthink it is necessary to be able to handle schema changes for the backend\nstorage in addition to scans, inserts, updates, and deletes, but I am not\nsure if it is already possible in some way that I haven't discovered or if\nI should just try to propose something (making the storage manager API\nextensible seems like a good first attempt).\n\n\n> Back to schema change handling:\n>\n> For some schema changes postgres assumes that they can be done\n> \"in-place\", e.g. adding a column to a table.\n>\n> Other changes, e.g. changing the type of a column \"sufficiently\", will\n> cause a so called table rewrite. Which means that a new relation will be\n> created (including a call to relation_set_new_filenode()), then that new\n> relation will get all the new data inserted, and then\n> pg_class->relfilenode for the \"original\" relation will be changed to the\n> \"rewritten\" table (there's two variants of this, once for rewrites due\n> to ALTER TABLE and a separate one for VACUUM FULL/CLUSTER).\n>\n\nBut that is not visible in the access method interface. If I add debug\noutput to the memory table, I only see a call to needs_toast_table. 
If\nthere were a new call to create a new block and some additional information\nabout , this would be possible to handle.\n\nI *was* expecting either a call of set_filenode with a new xact id, or\nsomething like that, and with some information so that you can locate the\nschema change planned (e.g., digging through pg_class and friends), I just\ndon't see that when I add debug output.\n\n\n> When the transaction containing such a rewrite commits that\n> ->relfilenode change becomes visible for everyone, and the old\n> relfilenode will be deleted.\n>\n>\n> This means that right now there's no easy way to store the data anywhere\n> but in the file referenced by pg_class.relfilenode. I don't think\n> anybody would object on principle to making the necessary infrastructure\n> changes to support storing data elsewhere - but I think it'll also not\n> quite as simple as the change you suggested :(.\n>\n\nSeems it is not. It is fine to store table information in pg_class, but to\nimplement \"interesting\" backends there need to be a way to handle schema\nchanges (among other things), but I do not see how, or I have misunderstood\nhow this is expected to work.\n\nI have a lot more questions about the table access API, but this is the\nfirst thing.\n\nBest wishes,\nMats Kindahl\n\n\n> Greetings,\n>\n> Andres Freund\n>\n\nOn Tue, Feb 23, 2021 at 2:11 AM Andres Freund <andres@anarazel.de> wrote:Hi,Hi Andres,Thanks for the answer and sorry about the late reply.\n\nOn 2021-02-22 08:33:21 +0100, Mats Kindahl wrote:\n> I started to experiment with the table access method interface to see if it\n> can be used for some ideas I have.\n\nCool.\n\n\n> The `relation_set_new_filenode` is indirectly called from\n> `heap_create_with_catalog`, but there is no corresponding callback from\n> `heap_drop_with_catalog`. 
It also seems like the intention is that the\n> callback should call `RelationCreateStorage` itself (makes sense, since the\n> access method knows about how to use the storage), so it seems natural to\n> add a `relation_reset_filenode` to the table AM that is called from\n> `heap_drop_with_catalog` for tables and add that to the heap implementation\n> (see the attached patch).\n\nI don't think that's quite right. It's not exactly obvious from the\nname, but RelationDropStorage() does not actually drop storage. Instead\nit *schedules* the storage to be dropped upon commit.\n\nThe reason for deferring the dropping of table storage is that DDL in\npostgres is transactional. Therefore we cannot remove the storage at the\nmoment the DROP TABLE is executed - only when the transaction that\nperformed the DDL commits. Therefore just providing you with a callback\nthat runs in heap_drop_with_catalog() doesn't really achieve much -\nyou'd not have a way to execute the \"actual\" dropping of the relation at\nthe later stage.Yeah, I found the chain (performDeletion -> deleteOneObject -> doDeletion -> heap_drop_with_catalog) where the delete was just scheduled for deletion but it appeared like this was the place to actually perform the \"actual\" delete. Looking closer, I see this was the wrong location. However, the intention was to get a callback when the \"actual\" delete should happen. Before that, the blocks are still potentially alive and could be read, so shouldn't be recycled.It seems the right location seems to be in the storage manager (smgr_unlink in smgr.c), but that does not seem to be extensible, or are there any plans to make it available so that you can implement something other than just \"magnetic disk\"? 
\n\n> Creating new blocks for a table is straightforward to implement by using\n> the `relation_set_new_filenode` callback where you can create new memory\n> blocks for a relation, but I cannot find a way to clean up those blocks\n> when the table is dropped nor a way to handle a change of the schema for a\n> table.\n\nWhat precisely do you mean with the \"handle a change of the schema\" bit?\nI.e. what would you like to do, and what do you think is preventing you\nfrom it? But before you answer see my next point below.\n\n\n> Altering the schema does not seem to be covered at all, but this is\n> something that table access methods need to know about since it might want\n> to optimize the internal storage when the schema changes. I have not been\n> able to find any discussions around this, but it seems like a natural thing\n> to do with a table. Have I misunderstood how this works?\n\nDue to postgres' transactional DDL you cannot really change the storage\nlayout of *existing data* when that DDL command is executed - the data\nstill needs to be interpretable in case the DDL is rolled back\n(including when crashing).No, didn't expect this, but some means to see that a schema change is about to happen. \n\nBefore I explain some more: Could you describe in a bit more detail what\nkind of optimization you'd like to make?This is not really about any optimizations, it more about a good API for tables and managing storage. If a memory table can be implemented entirely in the extension and storage managed fully, there is a lot of interesting potential for various implementations of table backends. For this to work I think it is necessary to be able to handle schema changes for the backend storage in addition to scans, inserts, updates, and deletes, but I am not sure if it is already possible in some way that I haven't discovered or if I should just try to propose something (making the storage manager API extensible seems like a good first attempt). 
\n\nBack to schema change handling:\n\nFor some schema changes postgres assumes that they can be done\n\"in-place\", e.g. adding a column to a table.\n\nOther changes, e.g. changing the type of a column \"sufficiently\", will\ncause a so called table rewrite. Which means that a new relation will be\ncreated (including a call to relation_set_new_filenode()), then that new\nrelation will get all the new data inserted, and then\npg_class->relfilenode for the \"original\" relation will be changed to the\n\"rewritten\" table (there's two variants of this, once for rewrites due\nto ALTER TABLE and a separate one for VACUUM FULL/CLUSTER).\n\nBut that is not visible in the access method interface. If I add debug output to the memory table, I only see a call to needs_toast_table. If there were a new call to create a new block and some additional information about , this would be possible to handle.\n\nI *was* expecting either a call of set_filenode with a new xact id, or something like that, and with some information so that you can locate the schema change planned (e.g., digging through pg_class and friends), I just don't see that when I add debug output.\n\nWhen the transaction containing such a rewrite commits that\n->relfilenode change becomes visible for everyone, and the old\nrelfilenode will be deleted.\n\n\nThis means that right now there's no easy way to store the data anywhere\nbut in the file referenced by pg_class.relfilenode. I don't think\nanybody would object on principle to making the necessary infrastructure\nchanges to support storing data elsewhere - but I think it'll also not\nquite as simple as the change you suggested :(.\n\nSeems it is not. It is fine to store table information in pg_class, but to implement \"interesting\" backends there need to be a way to handle schema changes (among other things), but I do not see how, or I have misunderstood how this is expected to work.
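The rewrite-and-swap flow described above can be modelled in a few lines. Again a standalone sketch, not PostgreSQL code: the struct and function names are invented, and the oids are just example numbers like the ones that show up in the backtraces elsewhere in the thread:

```c
#include <assert.h>

/* Toy model of a pg_class row: a relation's stable oid plus the
 * relfilenode naming the physical storage that currently backs it. */
typedef struct RelMapEntry
{
	int			oid;			/* stable identity of the relation */
	int			relfilenode;	/* which storage currently backs it */
} RelMapEntry;

/* Model of a table rewrite: the rewritten data has already been inserted
 * into storage 'new_filenode'; here we only do the swap that
 * finish_heap_swap() performs at the end, and hand back the old filenode
 * so the caller can schedule it for deletion at commit. */
static int
swap_relfilenode(RelMapEntry *rel, int new_filenode)
{
	int			old_filenode = rel->relfilenode;

	rel->relfilenode = new_filenode;
	return old_filenode;
}
```

The point of the model is that the relation's oid never changes across a rewrite; only the storage behind it is replaced, and the old storage becomes deletable once the rewriting transaction commits.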
I have a lot more questions about the table access API, but this is the first thing.\n\nBest wishes,\nMats Kindahl\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 3 Mar 2021 22:15:18 +0100", "msg_from": "Mats Kindahl <mats@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Table AM and DDLs" }, { "msg_contents": "Hi,\n\nOn 2021-03-03 22:15:18 +0100, Mats Kindahl wrote:\n> On Tue, Feb 23, 2021 at 2:11 AM Andres Freund <andres@anarazel.de> wrote:\n> Thanks for the answer and sorry about the late reply.\n\nMine is even later ;)\n\n\n> > I don't think that's quite right. It's not exactly obvious from the\n> > name, but RelationDropStorage() does not actually drop storage. Instead\n> > it *schedules* the storage to be dropped upon commit.\n> >\n> > The reason for deferring the dropping of table storage is that DDL in\n> > postgres is transactional. Therefore we cannot remove the storage at the\n> > moment the DROP TABLE is executed - only when the transaction that\n> > performed the DDL commits. Therefore just providing you with a callback\n> > that runs in heap_drop_with_catalog() doesn't really achieve much -\n> > you'd not have a way to execute the \"actual\" dropping of the relation at\n> > the later stage.\n> >\n> \n> Yeah, I found the chain (performDeletion -> deleteOneObject -> doDeletion\n> -> heap_drop_with_catalog) where the delete was just scheduled for deletion\n> but it appeared like this was the place to actually perform the \"actual\"\n> delete. Looking closer, I see this was the wrong location.
However, the\n> intention was to get a callback when the \"actual\" delete should happen.\n> Before that, the blocks are still potentially alive and could be read, so\n> shouldn't be recycled.\n> \n> It seems the right location seems to be in the storage manager (smgr_unlink\n> in smgr.c), but that does not seem to be extensible, or are there any plans\n> to make it available so that you can implement something other than just\n> \"magnetic disk\"?\n\nThere've been patches to add new types of storage below smgr.c, but not\nin way that can be done outside of build time. As far as I recall.\n\n\n> > Before I explain some more: Could you describe in a bit more detail what\n> > kind of optimization you'd like to make?\n> >\n> \n> This is not really about any optimizations, it more about a good API for\n> tables and managing storage. If a memory table can be implemented entirely\n> in the extension and storage managed fully, there is a lot of interesting\n> potential for various implementations of table backends. For this to work I\n> think it is necessary to be able to handle schema changes for the backend\n> storage in addition to scans, inserts, updates, and deletes, but I am not\n> sure if it is already possible in some way that I haven't discovered or if\n> I should just try to propose something (making the storage manager API\n> extensible seems like a good first attempt).\n\nAs long as you have a compatible definition of what is acceptable \"in\nplace\" ALTER TABLE (e.g. adding new columns, changing between compatible\ntypes), and what requires a table rewrite (e.g. an incompatible column\ntype change), I don't see a real problem. Except for the unlink thing\nabove.\n\nAny schema change requiring a table rewrite will trigger a new relation\nto be created, which in turn will involve tableam. 
After that you'll\njust get called back to re-insert all the tuples in the original\nrelation.\n\nIf you want a different definition on what needs a rewrite, good luck,\nit'll be a heck of a lot more work.\n\n\n\n> > Due to postgres' transactional DDL you cannot really change the storage\n> > layout of *existing data* when that DDL command is executed - the data\n> > still needs to be interpretable in case the DDL is rolled back\n> > (including when crashing).\n> >\n> \n> No, didn't expect this, but some means to see that a schema change is about\n> to happen.\n\nFor anything that's not in-place you'll see a new table being created\n(c.f. ATRewriteTables() calling make_new_heap()). The relfilenode\nidentifying the data (as opposed to the oid, identifying a relation),\nwill then be swapped with the current table's relfilenode via\nfinish_heap_swap().\n\n\n> > Other changes, e.g. changing the type of a column \"sufficiently\", will\n> > cause a so called table rewrite. Which means that a new relation will be\n> > created (including a call to relation_set_new_filenode()), then that new\n> > relation will get all the new data inserted, and then\n> > pg_class->relfilenode for the \"original\" relation will be changed to the\n> > \"rewritten\" table (there's two variants of this, once for rewrites due\n> > to ALTER TABLE and a separate one for VACUUM FULL/CLUSTER).\n\n> But that is not visible in the access method interface. If I add debug\n> output to the memory table, I only see a call to needs_toast_table. If\n> there were a new call to create a new block and some additional information\n> about , this would be possible to handle.\n\nIt should be. If I e.g. 
do\n\nCREATE TABLE blarg(id int4 not null);\nI get one call to table_relation_set_new_filenode()\n\n#0 table_relation_set_new_filenode (rel=0x7f84c2417b70, newrnode=0x7f84c2417b70, persistence=112 'p', freezeXid=0x7ffc8c61263c, minmulti=0x7ffc8c612638)\n at /home/andres/src/postgresql/src/include/access/tableam.h:1596\n#1 0x000055b1901e9116 in heap_create (relname=0x7ffc8c612900 \"blarg\", relnamespace=2200, reltablespace=0, relid=3016410, relfilenode=3016410, accessmtd=2, \n tupDesc=0x55b191d2a8c8, relkind=114 'r', relpersistence=112 'p', shared_relation=false, mapped_relation=false, allow_system_table_mods=false, \n relfrozenxid=0x7ffc8c61263c, relminmxid=0x7ffc8c612638) at /home/andres/src/postgresql/src/backend/catalog/heap.c:436\n#2 0x000055b1901eab28 in heap_create_with_catalog (relname=0x7ffc8c612900 \"blarg\", relnamespace=2200, reltablespace=0, relid=3016410, reltypeid=0, \n reloftypeid=0, ownerid=10, accessmtd=2, tupdesc=0x55b191d2a8c8, cooked_constraints=0x0, relkind=114 'r', relpersistence=112 'p', shared_relation=false, \n mapped_relation=false, oncommit=ONCOMMIT_NOOP, reloptions=0, use_user_acl=true, allow_system_table_mods=false, is_internal=false, relrewrite=0, \n typaddress=0x0) at /home/andres/src/postgresql/src/backend/catalog/heap.c:1291\n#3 0x000055b19030002a in DefineRelation (stmt=0x55b191d31478, relkind=114 'r', ownerId=10, typaddress=0x0, \n queryString=0x55b191c35bc0 \"CREATE TABLE blarg(id int8 not null\n\nthen when I do\nALTER TABLE blarg ALTER COLUMN id TYPE int8;\nI see\n#0 table_relation_set_new_filenode (rel=0x7f84c241f2a8, newrnode=0x7f84c241f2a8, persistence=112 'p', freezeXid=0x7ffc8c61275c, minmulti=0x7ffc8c612758)\n at /home/andres/src/postgresql/src/include/access/tableam.h:1596\n#1 0x000055b1901e9116 in heap_create (relname=0x7ffc8c612860 \"pg_temp_3016404\", relnamespace=2200, reltablespace=0, relid=3016407, relfilenode=3016407, \n accessmtd=2, tupDesc=0x7f84c24162a0, relkind=114 'r', relpersistence=112 'p', 
shared_relation=false, mapped_relation=false, allow_system_table_mods=true, \n relfrozenxid=0x7ffc8c61275c, relminmxid=0x7ffc8c612758) at /home/andres/src/postgresql/src/backend/catalog/heap.c:436\n#2 0x000055b1901eab28 in heap_create_with_catalog (relname=0x7ffc8c612860 \"pg_temp_3016404\", relnamespace=2200, reltablespace=0, relid=3016407, reltypeid=0, \n reloftypeid=0, ownerid=10, accessmtd=2, tupdesc=0x7f84c24162a0, cooked_constraints=0x0, relkind=114 'r', relpersistence=112 'p', shared_relation=false, \n mapped_relation=false, oncommit=ONCOMMIT_NOOP, reloptions=0, use_user_acl=false, allow_system_table_mods=true, is_internal=true, relrewrite=3016404, \n typaddress=0x0) at /home/andres/src/postgresql/src/backend/catalog/heap.c:1291\n\n\n\n> I *was* expecting either a call of set_filenode with a new xact id, or\n> something like that, and with some information so that you can locate the\n> schema change planned (e.g., digging through pg_class and friends), I just\n> don't see that when I add debug output.\n\nYou should. And it'll have the new table \"schema\" associated. E.g. in\nthe above example the new table will have\nrel->rd_att->natts == 1\nrel->rd_att->attrs[0].atttypid == 20 (i.e. int8)\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 21 Mar 2021 16:16:26 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Table AM and DDLs" }, { "msg_contents": "On Mon, Mar 22, 2021 at 12:16 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2021-03-03 22:15:18 +0100, Mats Kindahl wrote:\n> > On Tue, Feb 23, 2021 at 2:11 AM Andres Freund <andres@anarazel.de>\n> wrote:\n> > Thanks for the answer and sorry about the late reply.\n>\n> Mine is even later ;)\n>\n\n:)\n\nSeems I keep the tradition. :)\n\nThanks a lot for the pointers, I have some comments below both about DROP\nTABLE and ALTER TABLE.\n\n\n> > > I don't think that's quite right. 
It's not exactly obvious from the\n> > > name, but RelationDropStorage() does not actually drop storage. Instead\n> > > it *schedules* the storage to be dropped upon commit.\n> > >\n> > > The reason for deferring the dropping of table storage is that DDL in\n> > > postgres is transactional. Therefore we cannot remove the storage at\n> the\n> > > moment the DROP TABLE is executed - only when the transaction that\n> > > performed the DDL commits. Therefore just providing you with a callback\n> > > that runs in heap_drop_with_catalog() doesn't really achieve much -\n> > > you'd not have a way to execute the \"actual\" dropping of the relation\n> at\n> > > the later stage.\n> > >\n> >\n> > Yeah, I found the chain (performDeletion -> deleteOneObject -> doDeletion\n> > -> heap_drop_with_catalog) where the delete was just scheduled for\n> deletion\n> > but it appeared like this was the place to actually perform the \"actual\"\n> > delete. Looking closer, I see this was the wrong location. However, the\n> > intention was to get a callback when the \"actual\" delete should happen.\n> > Before that, the blocks are still potentially alive and could be read, so\n> > shouldn't be recycled.\n> >\n> > It seems the right location seems to be in the storage manager\n> (smgr_unlink\n> > in smgr.c), but that does not seem to be extensible, or are there any\n> plans\n> > to make it available so that you can implement something other than just\n> > \"magnetic disk\"?\n>\n> There've been patches to add new types of storage below smgr.c, but not\n> in way that can be done outside of build time. As far as I recall.\n>\n\nI have done some more research and I do not think it is necessary to extend\nthe storage layer. 
As a matter of fact, I think the patch I suggested is\nthe right approach: let me elaborate on why.\n\nLet's look at how the implementation works with the heap access method (the\nfile heapam_handler.c) and for this case let's use CREATE TABLE, DROP\nTABLE, and TRUNCATE TABLE (last one since that is supported in the Table AM\nand hence is a good reference for the comparison).\n\nDisregarding surrounding layers, we have three layers that are important\nhere:\n\n 1. Heap catalog layer (not sure what to call it, but it's the\n src/backend/catalog/heap.c file)\n 2. AM layer (the src/backend/access/heap/heapam_handler.c file)\n 3. Storage layer (the src/backend/catalog/storage.c file) \"code to\n create and destroy physical storage for relations\".\n\nLooking at CREATE TRUNCATE, we have the following calls through these\nlayers:\n\n 1. In the heap catalog layer we have a call of heap_truncate_one_rel\n which calls the table AM layer.\n 2. In the Table AM layer heapam_relation_nontransactional_truncate will\n just call the storage layer to truncate the storage.\n 3. The storage layer gets called through RelationTruncate, which will\n truncate the actual files.\n\nLooking at CREATE TABLE, we have a similar pattern:\n\n 1. In the heap catalog layer heap_create_with_catalog is called, which\n in turn calls heap_create, which will create the actual relcache and also\n call the table AM layer if it is a relation, materialized view, or\n toastvalue.\n 2. In the Table AM layer, heapam_relation_set_new_filenode is called\n which will record the transaction identifiers and call the storage layer to\n create the underlying storage.\n 3. 
In the storage layer, RelationCreateStorage will create the necessary\n storage, but also register the table for deletion if the transaction is\n aborted.\n\nNote here that the storage layer remembers the table for deletion by saving\nit in pendingDeletes, which is local to the storage layer.\n\nLooking at DROP TABLE, we have a similar pattern, but am missing one step:\n\n 1. In the heap catalog layer the function heap_drop_with_catalog is\n called, which releases the system cache and calls the storage layer to drop\n the relation\n 2. In the storage layer, the function RelationDropStorage is called,\n which will record the table to be dropped in the pendingDeletes\n\nWhen committing (or aborting) the transaction, there are two calls that are\ninteresting, in this order:\n\n 1. CallXactCallbacks which calls registered callbacks\n 2. smgrDoPendingDeletes, which calls the storage layer directly to\n perform the actual deletion, if necessary.\n\nNow, suppose that we want to replace the storage layer with a different\none. It is straightforward to replace it by implementing the Table AM\nmethods above, but we are missing a callback on dropping the table. If we\nhave that, we can record the table-to-be-dropped in a similar manner to how\nthe heap AM does it and register a transaction callback using\nRegisterXactCallback.\n\n\n>\n> > > Before I explain some more: Could you describe in a bit more detail\n> what\n> > > kind of optimization you'd like to make?\n> > >\n> >\n> > This is not really about any optimizations, it more about a good API for\n> > tables and managing storage. If a memory table can be implemented\n> entirely\n> > in the extension and storage managed fully, there is a lot of interesting\n> > potential for various implementations of table backends. 
For this to\n> work I\n> > think it is necessary to be able to handle schema changes for the backend\n> > storage in addition to scans, inserts, updates, and deletes, but I am not\n> > sure if it is already possible in some way that I haven't discovered or\n> if\n> > I should just try to propose something (making the storage manager API\n> > extensible seems like a good first attempt).\n>\n> As long as you have a compatible definition of what is acceptable \"in\n> place\" ALTER TABLE (e.g. adding new columns, changing between compatible\n> types), and what requires a table rewrite (e.g. an incompatible column\n> type change), I don't see a real problem. Except for the unlink thing\n> above.\n>\n> Any schema change requiring a table rewrite will trigger a new relation\n> to be created, which in turn will involve tableam. After that you'll\n> just get called back to re-insert all the tuples in the original\n> relation.\n>\n> If you want a different definition on what needs a rewrite, good luck,\n> it'll be a heck of a lot more work.\n>\n\nNo, this should work fine.\n\n> > Due to postgres' transactional DDL you cannot really change the storage\n> > > layout of *existing data* when that DDL command is executed - the data\n> > > still needs to be interpretable in case the DDL is rolled back\n> > > (including when crashing).\n> > >\n> >\n> > No, didn't expect this, but some means to see that a schema change is\n> about\n> > to happen.\n>\n> For anything that's not in-place you'll see a new table being created\n> (c.f. ATRewriteTables() calling make_new_heap()). The relfilenode\n> identifying the data (as opposed to the oid, identifying a relation),\n> will then be swapped with the current table's relfilenode via\n> finish_heap_swap().\n>\n>\n> > > Other changes, e.g. changing the type of a column \"sufficiently\", will\n> > > cause a so called table rewrite. 
Which means that a new relation will\n> be\n> > > created (including a call to relation_set_new_filenode()), then that\n> new\n> > > relation will get all the new data inserted, and then\n> > > pg_class->relfilenode for the \"original\" relation will be changed to\n> the\n> > > \"rewritten\" table (there's two variants of this, once for rewrites due\n> > > to ALTER TABLE and a separate one for VACUUM FULL/CLUSTER).\n>\n> > But that is not visible in the access method interface. If I add debug\n> > output to the memory table, I only see a call to needs_toast_table. If\n> > there were a new call to create a new block and some additional\n> information\n> > about , this would be possible to handle.\n>\n> It should be. If I e.g. do\n>\n> CREATE TABLE blarg(id int4 not null);\n> I get one call to table_relation_set_new_filenode()\n>\n> #0 table_relation_set_new_filenode (rel=0x7f84c2417b70,\n> newrnode=0x7f84c2417b70, persistence=112 'p', freezeXid=0x7ffc8c61263c,\n> minmulti=0x7ffc8c612638)\n> at /home/andres/src/postgresql/src/include/access/tableam.h:1596\n> #1 0x000055b1901e9116 in heap_create (relname=0x7ffc8c612900 \"blarg\",\n> relnamespace=2200, reltablespace=0, relid=3016410, relfilenode=3016410,\n> accessmtd=2,\n> tupDesc=0x55b191d2a8c8, relkind=114 'r', relpersistence=112 'p',\n> shared_relation=false, mapped_relation=false,\n> allow_system_table_mods=false,\n> relfrozenxid=0x7ffc8c61263c, relminmxid=0x7ffc8c612638) at\n> /home/andres/src/postgresql/src/backend/catalog/heap.c:436\n> #2 0x000055b1901eab28 in heap_create_with_catalog (relname=0x7ffc8c612900\n> \"blarg\", relnamespace=2200, reltablespace=0, relid=3016410, reltypeid=0,\n> reloftypeid=0, ownerid=10, accessmtd=2, tupdesc=0x55b191d2a8c8,\n> cooked_constraints=0x0, relkind=114 'r', relpersistence=112 'p',\n> shared_relation=false,\n> mapped_relation=false, oncommit=ONCOMMIT_NOOP, reloptions=0,\n> use_user_acl=true, allow_system_table_mods=false, is_internal=false,\n> relrewrite=0,\n> typaddress=0x0) 
at\n> /home/andres/src/postgresql/src/backend/catalog/heap.c:1291\n> #3 0x000055b19030002a in DefineRelation (stmt=0x55b191d31478, relkind=114\n> 'r', ownerId=10, typaddress=0x0,\n> queryString=0x55b191c35bc0 \"CREATE TABLE blarg(id int8 not null\n>\n> then when I do\n> ALTER TABLE blarg ALTER COLUMN id TYPE int8;\n> I see\n> #0 table_relation_set_new_filenode (rel=0x7f84c241f2a8,\n> newrnode=0x7f84c241f2a8, persistence=112 'p', freezeXid=0x7ffc8c61275c,\n> minmulti=0x7ffc8c612758)\n> at /home/andres/src/postgresql/src/include/access/tableam.h:1596\n> #1 0x000055b1901e9116 in heap_create (relname=0x7ffc8c612860\n> \"pg_temp_3016404\", relnamespace=2200, reltablespace=0, relid=3016407,\n> relfilenode=3016407,\n> accessmtd=2, tupDesc=0x7f84c24162a0, relkind=114 'r',\n> relpersistence=112 'p', shared_relation=false, mapped_relation=false,\n> allow_system_table_mods=true,\n> relfrozenxid=0x7ffc8c61275c, relminmxid=0x7ffc8c612758) at\n> /home/andres/src/postgresql/src/backend/catalog/heap.c:436\n> #2 0x000055b1901eab28 in heap_create_with_catalog (relname=0x7ffc8c612860\n> \"pg_temp_3016404\", relnamespace=2200, reltablespace=0, relid=3016407,\n> reltypeid=0,\n> reloftypeid=0, ownerid=10, accessmtd=2, tupdesc=0x7f84c24162a0,\n> cooked_constraints=0x0, relkind=114 'r', relpersistence=112 'p',\n> shared_relation=false,\n> mapped_relation=false, oncommit=ONCOMMIT_NOOP, reloptions=0,\n> use_user_acl=false, allow_system_table_mods=true, is_internal=true,\n> relrewrite=3016404,\n> typaddress=0x0) at\n> /home/andres/src/postgresql/src/backend/catalog/heap.c:1291\n>\n>\n>\n> > I *was* expecting either a call of set_filenode with a new xact id, or\n> > something like that, and with some information so that you can locate the\n> > schema change planned (e.g., digging through pg_class and friends), I\n> just\n> > don't see that when I add debug output.\n>\n> You should. And it'll have the new table \"schema\" associated. E.g. 
in\n> the above example the new table will have\n> rel->rd_att->natts == 1\n> rel->rd_att->attrs[0].atttypid == 20 (i.e. int8)\n>\n\nI didn't get a callback because I did ADD COLUMN and that works\ndifferently: it does not call set_filenode until you either try to insert\nsomething or run a vacuum. Thanks for the pointers, it helped a lot. I need\nto look over the code a little more.\n\nThanks,\nMats Kindahl\n\n\n>\n> Greetings,\n>\n> Andres Freund\n>\n\nOn Mon, Mar 22, 2021 at 12:16 AM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2021-03-03 22:15:18 +0100, Mats Kindahl wrote:\n> On Tue, Feb 23, 2021 at 2:11 AM Andres Freund <andres@anarazel.de> wrote:\n> Thanks for the answer and sorry about the late reply.\n\nMine is even later ;):) Seems I keep the tradition. :)Thanks a lot for the pointers, I have some comments below both about DROP TABLE and ALTER TABLE.\n\n> > I don't think that's quite right. It's not exactly obvious from the\n> > name, but RelationDropStorage() does not actually drop storage. Instead\n> > it *schedules* the storage to be dropped upon commit.\n> >\n> > The reason for deferring the dropping of table storage is that DDL in\n> > postgres is transactional. Therefore we cannot remove the storage at the\n> > moment the DROP TABLE is executed - only when the transaction that\n> > performed the DDL commits. Therefore just providing you with a callback\n> > that runs in heap_drop_with_catalog() doesn't really achieve much -\n> > you'd not have a way to execute the \"actual\" dropping of the relation at\n> > the later stage.\n> >\n> \n> Yeah, I found the chain (performDeletion -> deleteOneObject -> doDeletion\n> -> heap_drop_with_catalog) where the delete was just scheduled for deletion\n> but it appeared like this was the place to actually perform the \"actual\"\n> delete. Looking closer, I see this was the wrong location. 
However, the\n> intention was to get a callback when the \"actual\" delete should happen.\n> Before that, the blocks are still potentially alive and could be read, so\n> shouldn't be recycled.\n> \n> It seems the right location seems to be in the storage manager (smgr_unlink\n> in smgr.c), but that does not seem to be extensible, or are there any plans\n> to make it available so that you can implement something other than just\n> \"magnetic disk\"?\n\nThere've been patches to add new types of storage below smgr.c, but not\nin way that can be done outside of build time. As far as I recall.I have done some more research and I do not think it is necessary to extend the storage layer. As a matter of fact, I think the patch I suggested is the right approach: let me elaborate on why.Let's look at how the implementation works with the heap access method (the file heapam_handler.c) and for this case let's use CREATE TABLE, DROP TABLE, and TRUNCATE TABLE (last one since that is supported in the Table AM and hence is a good reference for the comparison).Disregarding surrounding layers, we have three layers that are important here:Heap catalog layer (not sure what to call it, but it's the src/backend/catalog/heap.c file)AM layer (the src/backend/access/heap/heapam_handler.c file)Storage layer (the src/backend/catalog/storage.c file) \"code to create and destroy physical storage for relations\".Looking at CREATE TRUNCATE, we have the following calls through these layers:In the heap catalog layer we have a call of heap_truncate_one_rel which calls the table AM layer.In the Table AM layer heapam_relation_nontransactional_truncate will just call the storage layer to truncate the storage.The storage layer gets called through RelationTruncate, which will truncate the actual files.Looking at CREATE TABLE, we have a similar pattern:In the heap catalog layer heap_create_with_catalog is called, which in turn calls heap_create, which will create the actual relcache and also call the 
table AM layer if it is a relation, materialized view, or toastvalue.In the Table AM layer, heapam_relation_set_new_filenode is called which will record the transaction identifiers and call the storage layer to create the underlying storage.In the storage layer, RelationCreateStorage will create the necessary storage, but also register the table for deletion if the transaction is aborted.Note here that the storage layer remembers the table for deletion by saving it in pendingDeletes, which is local to the storage layer.Looking at DROP TABLE, we have a similar pattern, but am missing one step:In the heap catalog layer the function heap_drop_with_catalog is called, which releases the system cache and calls the storage layer to drop the relationIn the storage layer, the function RelationDropStorage is called, which will record the table to be dropped in the pendingDeletesWhen committing (or aborting) the transaction, there are two calls that are interesting, in this order:CallXactCallbacks which calls registered callbackssmgrDoPendingDeletes, which calls the storage layer directly to perform the actual deletion, if necessary.Now, suppose that we want to replace the storage layer with a different one. It is straightforward to replace it by implementing the Table AM methods above, but we are missing a callback on dropping the table. If we have that, we can record the table-to-be-dropped in a similar manner to how the heap AM does it and register a transaction callback using RegisterXactCallback.\n\n\n> > Before I explain some more: Could you describe in a bit more detail what\n> > kind of optimization you'd like to make?\n> >\n> \n> This is not really about any optimizations, it more about a good API for\n> tables and managing storage. If a memory table can be implemented entirely\n> in the extension and storage managed fully, there is a lot of interesting\n> potential for various implementations of table backends. 
For this to work I\n> think it is necessary to be able to handle schema changes for the backend\n> storage in addition to scans, inserts, updates, and deletes, but I am not\n> sure if it is already possible in some way that I haven't discovered or if\n> I should just try to propose something (making the storage manager API\n> extensible seems like a good first attempt).\n\nAs long as you have a compatible definition of what is acceptable \"in\nplace\" ALTER TABLE (e.g. adding new columns, changing between compatible\ntypes), and what requires a table rewrite (e.g. an incompatible column\ntype change), I don't see a real problem. Except for the unlink thing\nabove.\n\nAny schema change requiring a table rewrite will trigger a new relation\nto be created, which in turn will involve tableam. After that you'll\njust get called back to re-insert all the tuples in the original\nrelation.\n\nIf you want a different definition on what needs a rewrite, good luck,\nit'll be a heck of a lot more work.No, this should work fine. \n> > Due to postgres' transactional DDL you cannot really change the storage\n> > layout of *existing data* when that DDL command is executed - the data\n> > still needs to be interpretable in case the DDL is rolled back\n> > (including when crashing).\n> >\n> \n> No, didn't expect this, but some means to see that a schema change is about\n> to happen.\n\nFor anything that's not in-place you'll see a new table being created\n(c.f. ATRewriteTables() calling make_new_heap()). The relfilenode\nidentifying the data (as opposed to the oid, identifying a relation),\nwill then be swapped with the current table's relfilenode via\nfinish_heap_swap().\n\n\n> > Other changes, e.g. changing the type of a column \"sufficiently\", will\n> > cause a so called table rewrite. 
Which means that a new relation will be\n> > created (including a call to relation_set_new_filenode()), then that new\n> > relation will get all the new data inserted, and then\n> > pg_class->relfilenode for the \"original\" relation will be changed to the\n> > \"rewritten\" table (there's two variants of this, once for rewrites due\n> > to ALTER TABLE and a separate one for VACUUM FULL/CLUSTER).\n\n> But that is not visible in the access method interface. If I add debug\n> output to the memory table, I only see a call to needs_toast_table. If\n> there were a new call to create a new block and some additional information\n> about , this would be possible to handle.\n\nIt should be. If I e.g. do\n\nCREATE TABLE blarg(id int4 not null);\nI get one call to table_relation_set_new_filenode()\n\n#0  table_relation_set_new_filenode (rel=0x7f84c2417b70, newrnode=0x7f84c2417b70, persistence=112 'p', freezeXid=0x7ffc8c61263c, minmulti=0x7ffc8c612638)\n    at /home/andres/src/postgresql/src/include/access/tableam.h:1596\n#1  0x000055b1901e9116 in heap_create (relname=0x7ffc8c612900 \"blarg\", relnamespace=2200, reltablespace=0, relid=3016410, relfilenode=3016410, accessmtd=2, \n    tupDesc=0x55b191d2a8c8, relkind=114 'r', relpersistence=112 'p', shared_relation=false, mapped_relation=false, allow_system_table_mods=false, \n    relfrozenxid=0x7ffc8c61263c, relminmxid=0x7ffc8c612638) at /home/andres/src/postgresql/src/backend/catalog/heap.c:436\n#2  0x000055b1901eab28 in heap_create_with_catalog (relname=0x7ffc8c612900 \"blarg\", relnamespace=2200, reltablespace=0, relid=3016410, reltypeid=0, \n    reloftypeid=0, ownerid=10, accessmtd=2, tupdesc=0x55b191d2a8c8, cooked_constraints=0x0, relkind=114 'r', relpersistence=112 'p', shared_relation=false, \n    mapped_relation=false, oncommit=ONCOMMIT_NOOP, reloptions=0, use_user_acl=true, allow_system_table_mods=false, is_internal=false, relrewrite=0, \n    typaddress=0x0) at 
/home/andres/src/postgresql/src/backend/catalog/heap.c:1291\n#3  0x000055b19030002a in DefineRelation (stmt=0x55b191d31478, relkind=114 'r', ownerId=10, typaddress=0x0, \n    queryString=0x55b191c35bc0 \"CREATE TABLE blarg(id int8 not null\n\nthen when I do\nALTER TABLE blarg ALTER COLUMN id TYPE int8;\nI see\n#0  table_relation_set_new_filenode (rel=0x7f84c241f2a8, newrnode=0x7f84c241f2a8, persistence=112 'p', freezeXid=0x7ffc8c61275c, minmulti=0x7ffc8c612758)\n    at /home/andres/src/postgresql/src/include/access/tableam.h:1596\n#1  0x000055b1901e9116 in heap_create (relname=0x7ffc8c612860 \"pg_temp_3016404\", relnamespace=2200, reltablespace=0, relid=3016407, relfilenode=3016407, \n    accessmtd=2, tupDesc=0x7f84c24162a0, relkind=114 'r', relpersistence=112 'p', shared_relation=false, mapped_relation=false, allow_system_table_mods=true, \n    relfrozenxid=0x7ffc8c61275c, relminmxid=0x7ffc8c612758) at /home/andres/src/postgresql/src/backend/catalog/heap.c:436\n#2  0x000055b1901eab28 in heap_create_with_catalog (relname=0x7ffc8c612860 \"pg_temp_3016404\", relnamespace=2200, reltablespace=0, relid=3016407, reltypeid=0, \n    reloftypeid=0, ownerid=10, accessmtd=2, tupdesc=0x7f84c24162a0, cooked_constraints=0x0, relkind=114 'r', relpersistence=112 'p', shared_relation=false, \n    mapped_relation=false, oncommit=ONCOMMIT_NOOP, reloptions=0, use_user_acl=false, allow_system_table_mods=true, is_internal=true, relrewrite=3016404, \n    typaddress=0x0) at /home/andres/src/postgresql/src/backend/catalog/heap.c:1291\n\n\n\n> I *was* expecting either a call of set_filenode with a new xact id, or\n> something like that, and with some information so that you can locate the\n> schema change planned (e.g., digging through pg_class and friends), I just\n> don't see that when I add debug output.\n\nYou should. And it'll have the new table \"schema\" associated. E.g. in\nthe above example the new table will have\nrel->rd_att->natts == 1\nrel->rd_att->attrs[0].atttypid == 20 (i.e. 
int8)\n\nI didn't get a callback because I did ADD COLUMN and that works differently: it does not call set_filenode until you either try to insert something or run a vacuum. Thanks for the pointers, it helped a lot. I need to look over the code a little more.\n\nThanks,\nMats Kindahl\n\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 5 Apr 2021 21:57:12 +0200", "msg_from": "Mats Kindahl <mats@timescale.com>", "msg_from_op": true, "msg_subject": "Table AM and DROP TABLE [ Was: Table AM and DDLs]" }, { "msg_contents": "On 05.04.2021 22:57, Mats Kindahl wrote:\n> Now, suppose that we want to replace the storage layer with a \n> different one. It is straightforward to replace it by implementing the \n> Table AM methods above, but we are missing a \n> callback on dropping the table. If we have that, we can record \n> the table-to-be-dropped in a similar manner to how the heap AM \n> does it and register a transaction callback using \n> RegisterXactCallback.\n\nThis explanation makes sense, and the suggested patch makes it easier to \nreplace the storage layer with a different one.\n\nSome other places might become problematic if we're trying to implement \nfully memory-based tables. For example, the heap_create_with_catalog -> \nGetNewRelFilenode -> access() call that directly checks the existence of \na file bypassing the smgr layer. 
But I think that adding a symmetric \ncallback to the tableam layer can be a good start for further experiments.\n\nSome nitpicks:\n\n+\t/*\n+\t * This callback needs to remove all associations with the relation `rel`\n+\t * since the relation is being dropped.\n+\t *\n+\t * See also table_relation_reset_filenode().\n+\t */\n\n\"Remove all associations\" sounds vague, maybe something like \"schedule \nthe relation files to be deleted at transaction commit\"?\n\n\n+\tvoid (*relation_reset_filenode) (Relation rel);\n\nThis line uses spaces instead of tabs.\n\n\nFor the reference, there is a recent patch that makes the smgr layer \nitself pluggable: \nhttps://www.postgresql.org/message-id/flat/1dc080496f58ce5375778baed0c0fbcc%40postgrespro.ru#502a1278ad8fce6ae85c08b4806c2289\n\n\n--\nAlexander Kuzmenkov\nhttps://www.timescale.com/", "msg_date": "Mon, 27 Sep 2021 14:18:00 +0300", "msg_from": "Alexander Kuzmenkov <akuzmenkov@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Table AM and DROP TABLE [ Was: Table AM and DDLs]" },
{ "msg_contents": "Hi hackers,\n\n> As a matter of fact, I think the patch I suggested is the right approach:\n> let me elaborate on why.\n> [...]\n> It is straightforward to replace it by implementing the Table AM methods\n> above, but we are missing a callback on dropping the table. If we have that,\n> we can record the table-to-be-dropped in a similar manner to how the heap AM\n> does it and register a transaction callback using RegisterXactCallback.\n\nSince no one objected in 5 months, I assume Mats made a good point. At least,\npersonally, I can't argue.\n\nThe patch looks good to me except for the fact that comments seem to be\ninaccurate in light of the discussion. 
The corrected patch is attached.\nI'm going to mark it as \"Ready for Committer\" unless anyone objects.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 27 Sep 2021 14:59:22 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Table AM and DROP TABLE [ Was: Table AM and DDLs]" }, { "msg_contents": "Hi hackers,\n\n> I'm going to mark it as \"Ready for Committer\" unless anyone objects.\n\nI updated the status of the patch.\n\nTo clarify, Alexander and I replied almost at the same time. The\ndrawbacks noted by Alexander are fixed in the v2 version of the patch.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 27 Oct 2021 16:08:57 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Table AM and DROP TABLE [ Was: Table AM and DDLs]" }, { "msg_contents": "On 27/09/2021 14:59, Aleksander Alekseev wrote:\n> Hi hackers,\n> \n>> As a matter of fact, I think the patch I suggested is the right approach:\n>> let me elaborate on why.\n>> [...]\n>> It is straightforward to replace it by implementing the Table AM methods\n>> above, but we are missing a callback on dropping the table. If we have that,\n>> we can record the table-to-be-dropped in a similar manner to how the heap AM\n>> does it and register a transaction callback using RegisterXactCallback.\n> \n> Since no one objected in 5 months, I assume Mats made a good point. At least,\n> personally, I can't argue.\n\nI agree that having a table AM callback at relation drop would make it \nmore consistent with creating and truncating a relation. Then again, the \nindexam API doesn't have a drop-callback either.\n\nBut what can you actually do in the callback? WAL replay of dropping the \nstorage needs to work without running any AM-specific code. It happens \nas part of replaying a commit record. So whatever action you do in the \ncallback will not be executed at WAL replay. 
Also, because the callback \nmerely *schedules* things to happen at commit, it cannot generate \nseparate WAL records about dropping resources either.\n\nMats's in-memory table is an interesting example. I guess you don't even \ntry WAL-logging that, so it's OK that nothing happens at WAL replay. As \nyou said, the callback to schedule deletion of the shared memory block \nand use an end-of-xact callback to perform the deletion. You're \nbasically re-inventing a pending-deletes mechanism similar to smgr's.\n\nI think you could actually piggyback on smgr's pending-deletions \nmechanism instead of re-inventing it. In the callback, you can call \nsmgrGetPendingDeletes(), and drop the shared memory segment for any \nrelation in that list.\n\n- Heikki\n\n\n", "msg_date": "Wed, 16 Feb 2022 11:07:15 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Table AM and DROP TABLE [ Was: Table AM and DDLs]" }, { "msg_contents": "On Wed, Feb 16, 2022 at 10:07 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> On 27/09/2021 14:59, Aleksander Alekseev wrote:\n> > Hi hackers,\n> >\n> >> As a matter of fact, I think the patch I suggested is the right\n> approach:\n> >> let me elaborate on why.\n> >> [...]\n> >> It is straightforward to replace it by implementing the Table AM methods\n> >> above, but we are missing a callback on dropping the table. If we have\n> that,\n> >> we can record the table-to-be-dropped in a similar manner to how the\n> heap AM\n> >> does it and register a transaction callback using RegisterXactCallback.\n> >\n> > Since no one objected in 5 months, I assume Mats made a good point. At\n> least,\n> > personally, I can't argue.\n>\n> I agree that having a table AM callback at relation drop would make it\n> more consistent with creating and truncating a relation. Then again, the\n> indexam API doesn't have a drop-callback either.\n>\n\nThat is actually a good point. 
We could add an on-drop-callback for the\nindexam as well, if we add it for tableam. Haven't looked at that though,\nso if you think it should be added, I can investigate.\n\n\n> But what can you actually do in the callback? WAL replay of dropping the\n> storage needs to work without running any AM-specific code. It happens\n> as part of replaying a commit record. So whatever action you do in the\n> callback will not be executed at WAL replay.\n\nAlso, because the callback\n> merely *schedules* things to happen at commit, it cannot generate\n> separate WAL records about dropping resources either.\n>\n\nDigressing slightly: This is actually a drawback and I have been looking\nfor a way to do things on recovery.\n\nJust to have an example: if you want to have an in-memory table that is\ndistributed the problem will be that even though the table is empty, it\nmight actually be of interest to fetch the contents from another node on\nrecovery, but this is currently not possible since it is assumed that all\nactions are present in the WAL and a memory table would not use the WAL for\nthis since one of the goals is to make it fast and avoid writes to disk.\n\nThis might be possible to piggyback on the first select or insert done on\nthe table, but it makes the system more complicated since it is easy to\nmiss one place where you need to do this fetching. If you always do this on\nrecovery it is a single place that you need to add and the system will\nafter that be in a predictable state.\n\nIn addition, if it were to be added to the first access of the table, it\nwould add execution time to this first operation, but most users would\nassume that all such work is done at recovery and that the database is\n\"warm\" after recovery.\n\nHowever, this is slightly outside the discussion for this proposed change,\nso we can ignore it for now.\n\n\n>\n> Mats's in-memory table is an interesting example. 
I guess you don't even\n> try WAL-logging that, so it's OK that nothing happens at WAL replay. As\n> you said, the callback to schedule deletion of the shared memory block\n> and use an end-of-xact callback to perform the deletion. You're\n> basically re-inventing a pending-deletes mechanism similar to smgr's.\n>\n\n> I think you could actually piggyback on smgr's pending-deletions\n> mechanism instead of re-inventing it. In the callback, you can call\n> smgrGetPendingDeletes(), and drop the shared memory segment for any\n> relation in that list.\n>\n\nHmm... it is a good point that smgrGetPendingDeletes() can be used in the\ncommit callback for something as simple as a memory table when all the data\nis local. It should also work well with a distributed optimistic storage\nengine when you certify the transaction at commit time. What will happen\nthen is that the actual \"drop command\" will be sent out at commit time\nrather than when the command is actually executed.\n\nMy main interest is, however, to have an API that works for all kinds of\nstorage engines, not just limited to local storage but also supporting\ndistributed storage systems and also being able to interact with existing\nimplementations. There are a few reasons why getting a notification when\nthe table is dropped rather than when the commit is done is beneficial.\n\n 1. In a distributed storage engine you might want to distribute changes\n speculatively when they happen so that the commit, once it occurs, will be\n fast. By sending out the action early, you allow work to start\n independently of the current machine, which will improve parallelization.\n 2. In a distributed storage engine or order of the statements received\n remotely make a difference. 
For example, if you want to use a distributed\n locking scheme for your distributed storage, you are currently forced to\n implement an optimistic scheme while in reality you might want to\n distribute the drop and lock the table exclusively on all remote nodes\n (this is already what PostgreSQL does, locking the table on a drop). I do\n realize that distributed transactions are not that simple and there are\n other problems associated with this, but this would still introduce an\n unnecessary restriction on what you can do.\n 3. A problem with optimistic protocols in general is that they drop in\n performance when you have a lot of writes. It is simply the case that other\n smaller transactions will constantly force a long-running transaction to be\n aborted. This also means that there is a risk that a transaction that drops\n a table will have to be aborted out of necessity since other transactions\n are updating the table. In a distributed system there will be more of\n those, so the odds of aborting \"more important\" transactions (in the sense\n of needing stronger locks) is higher.\n 4. A smaller issue is that right now the storage manager (smgr) and the\n transaction system are quite tightly coupled, which makes it more difficult\n to make the storage system \"pluggable\". I think that not requiring the use\n of pendingDeletes would move one step in the direction of removing this\n coupling, but I am not entirely sure here.\n\nIt is likely that many of these problems can be worked around by placing\nrestrictions on how DDLs can be used in transactions, but that would create\nunnecessary restrictions for the end-user. 
It might also be possible to\nfind implementation workarounds by placing code at strategic points in the\nimplementation, but this is error prone and the risk of making an error is\nhigher.\n\nBest wishes,\nMats Kindahl\n\n\n>\n> - Heikki\n>", "msg_date": "Fri, 29 Jul 2022 11:36:13 +0100", "msg_from": "Mats Kindahl <mats@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Table AM and DROP TABLE [ Was: Table AM and DDLs]" },
{ "msg_contents": "Hi,\n\nOn 2021-04-05 21:57:12 +0200, Mats Kindahl wrote:\n> 2. In the storage layer, the function RelationDropStorage is called,\n> which will record the table to be dropped in the pendingDeletes\n> \n> When committing (or aborting) the transaction, there are two calls that are\n> interesting, in this order:\n> \n> 1. CallXactCallbacks which calls registered callbacks\n> 2. smgrDoPendingDeletes, which calls the storage layer directly to\n> perform the actual deletion, if necessary.\n> \n> Now, suppose that we want to replace the storage layer with a different\n> one. 
It is straightforward to replace it by implementing the Table AM\n> methods above, but we are missing a callback on dropping the table. If we\n> have that, we can record the table-to-be-dropped in a similar manner to how\n> the heap AM does it and register a transaction callback using\n> RegisterXactCallback.\n\nI don't think implementing dropping relation data at-commit/rollback using\nxact callbacks can be correct. The dropping needs to be integrated with the\ncommit / abort records, so it is redone during crash recovery - that's not\npossible with xact callbacks.\n\nTo me it still seems fundamentally the wrong direction to implement a \"drop\nrelation callback\" tableam callback.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 1 Aug 2022 16:44:24 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Table AM and DROP TABLE [ Was: Table AM and DDLs]" }, { "msg_contents": "Hi,\n\nThe CF entry for this patch doesn't currently apply and there has been a bunch\nof feedback on the approach. Mats, are you actually waiting for further\nfeedback right now?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 2 Oct 2022 09:53:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Table AM and DROP TABLE [ Was: Table AM and DDLs]" }, { "msg_contents": "On Sun, Oct 02, 2022 at 09:53:01AM -0700, Andres Freund wrote:\n> The CF entry for this patch doesn't currently apply and there has been a bunch\n> of feedback on the approach. Mats, are you actually waiting for further\n> feedback right now?\n\nOkay, for now this has been marked as RwF.\n--\nMichael", "msg_date": "Wed, 12 Oct 2022 16:39:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Table AM and DROP TABLE [ Was: Table AM and DDLs]" }, { "msg_contents": "Hello all,\n\nI think the discussion went a little sideways, so let me recap what I'm\nsuggesting:\n\n 1. 
I mentioned that there is a missing callback when the filenode is\n unlinked and this is particularly evident when dropping a table.\n 2. It was correctly pointed out to me that an implementor need to ensure\n that dropping a table is transactional.\n 3. I argued that the callback is still correct and outlined how this can\n be handled by a table access method using xact callbacks, if necessary.\n\nI see huge potential in the table access method and would like to do my\npart in helping it in succeeding. I noted that the API is biased in how the\nexisting heap implementation works and is also very focused on\nimplementations of \"local\" storage engines. For more novel architectures\n(for example, various sorts of distributed architectures) and to be easier\nto work with, I think that the API can be improved in a few places. This is\na first step in the direction of making the API both easier to use as well\nas enabling more novel use-cases.\n\nWriting implementations with ease is more about having the right callbacks\nin the right places and providing the right information, but in some cases\nit is just not possible to implement efficient functionality with the\ncurrent interface. I think it can be useful to separate these two kinds of\nenhancements when discussing the API, but I think both are important for\nthe table access methods to be practically usable and to leverage the full\npower of this concept.\n\nOn Tue, Aug 2, 2022 at 1:44 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2021-04-05 21:57:12 +0200, Mats Kindahl wrote:\n> > 2. In the storage layer, the function RelationDropStorage is called,\n> > which will record the table to be dropped in the pendingDeletes\n> >\n> > When committing (or aborting) the transaction, there are two calls that\n> are\n> > interesting, in this order:\n> >\n> > 1. CallXactCallbacks which calls registered callbacks\n> > 2. 
smgrDoPendingDeletes, which calls the storage layer directly to\n> > perform the actual deletion, if necessary.\n> >\n> > Now, suppose that we want to replace the storage layer with a different\n> > one. It is straightforward to replace it by implementing the Table AM\n> > methods above, but we are missing a callback on dropping the table. If we\n> > have that, we can record the table-to-be-dropped in a similar manner to\n> how\n> > the heap AM does it and register a transaction callback using\n> > RegisterXactCallback.\n>\n> don't think implementing dropping relation data at-commit/rollback using\n> xact callbacks can be correct. The dropping needs to be integrated with the\n> commit / abort records, so it is redone during crash recovery - that's not\n> possible with xact callbacks.\n>\n\nYes, but this patch is about making the extension aware that a file node is\nbeing unlinked.\n\n\n> To me it still seems fundamentally the wrong direction to implement a \"drop\n> relation callback\" tableam callback.\n>\n\nThis is not really a \"drop table\" callback, it is just the most obvious\ncase where this is missing. So, just to recap the situation as it looks\nright now.\n\nHere is (transactional) truncate table:\n\n 1. Allocate a new file node in the same tablespace as the table\n 2. Add the file node to the list of pending node to delete\n 3. Overwrite the existing file node in the relation with the new one\n 4. Call table_relation_set_new_filenode to tell extension that there is\n a new filenode for the relation\n\nHere is drop table:\n\n 1. Add the existing file node to the list of pending deletes\n 2. Remove the table from the catalogs\n\nFor an extension writer, the disappearance of the old file node is\n\"invisible\" since there is no callback about this, but it is very clear\nwhen a new file node is allocated. In addition to being inconsistent, it\nadds an extra burden on the extension writer. 
To notice that a file node\nhas been unlinked you can register a transaction handler and investigate\nthe pending list at commit or abort time. Even though possible, there are\ntwo problems with this: 1) the table access method is notified \"late\" in\nthe transaction that the file node is going away, and 2) it is\nunnecessarily complicated to register a transaction handler only for\ninspecting this.\n\nTelling the access method that the filenode is unlinked by adding a\ncallback is by far the best solution since it does not affect existing\nextensions and will give the table access methods opportunities to act on\nit immediately.\n\nI have attached an updated patch that changes the names of the callbacks\nsince there was a name change. I had also missed the case of unlinking a\nfile node when tables were truncated, so I have added a callback for this\nas well.\n\nBest wishes,\nMats Kindahl\n\n\n> Greetings,\n>\n> Andres Freund\n>", "msg_date": "Wed, 16 Nov 2022 14:49:59 +0100", "msg_from": "Mats Kindahl <mats@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Table AM and DROP TABLE [ Was: Table AM and DDLs]" }, { "msg_contents": "Hi,\n\nOn 2022-11-16 14:49:59 +0100, Mats Kindahl wrote:\n> I think the discussion went a little sideways, so let me recap what I'm\n> suggesting:\n> \n> 1. I mentioned that there is a missing callback when the filenode is\n> unlinked and this is particularly evident when dropping a table.\n> 2. It was correctly pointed out to me that an implementor need to ensure\n> that dropping a table is transactional.\n> 3. I argued that the callback is still correct and outlined how this can\n> be handled by a table access method using xact callbacks, if necessary.\n\nI still think 3) isn't a solution to 2). The main issue is that xact callbacks\ndon't address that storage has to be removed during redo as well. 
For that\ndropping storage has to be integrated with commit / abort records.\n\nI don't think custom a custom WAL rmgr addresses this either - it has to be\nintegrated with the commit record, and the record has to be replayed as a whole.\n\n\n> I see huge potential in the table access method and would like to do my\n> part in helping it in succeeding. I noted that the API is biased in how the\n> existing heap implementation works and is also very focused on\n> implementations of \"local\" storage engines. For more novel architectures\n> (for example, various sorts of distributed architectures) and to be easier\n> to work with, I think that the API can be improved in a few places. This is\n> a first step in the direction of making the API both easier to use as well\n> as enabling more novel use-cases.\n\nI agree with that - there's lots more work to be done and the evolution from\nnot having the abstraction clearly shows. Some of the deficiencies are easy to\nfix, but others are there because there's no quick solution to them.\n\n\n> > To me it still seems fundamentally the wrong direction to implement a \"drop\n> > relation callback\" tableam callback.\n> >\n> \n> This is not really a \"drop table\" callback, it is just the most obvious\n> case where this is missing. So, just to recap the situation as it looks\n> right now.\n> \n> Here is (transactional) truncate table:\n> \n> 1. Allocate a new file node in the same tablespace as the table\n> 2. Add the file node to the list of pending node to delete\n> 3. Overwrite the existing file node in the relation with the new one\n> 4. Call table_relation_set_new_filenode to tell extension that there is\n> a new filenode for the relation\n> \n> Here is drop table:\n> \n> 1. Add the existing file node to the list of pending deletes\n> 2. 
Remove the table from the catalogs\n> \n> For an extension writer, the disappearance of the old file node is\n> \"invisible\" since there is no callback about this, but it is very clear\n> when a new file node is allocated. In addition to being inconsistent, it\n> adds an extra burden on the extension writer. To notice that a file node\n> has been unlinked you can register a transaction handler and investigate\n> the pending list at commit or abort time. Even though possible, there are\n> two problems with this: 1) the table access method is notified \"late\" in\n> the transaction that the file node is going away, and 2) it is\n> unnecessarily complicated to register a transaction handler only for\n> inspecting this.\n\nUsing a transaction callback solely also doesn't address the redo issue...\n\n\nI think to make this viable for filenodes that don't look like md.c's, you'd\nhave to add a way to make commit/abort records extensible by custom\nrmgrs. Then the replay of a commit/abort could call the custom rmgr for the\n\"sub-record\" during replay.\n\n\n> Telling the access method that the filenode is unlinked by adding a\n> callback is by far the best solution since it does not affect existing\n> extensions and will give the table access methods opportunities to act on\n> it immediately.\n\nI'm loathe to add a callback that I don't think can be used correctly without\nfurther changes.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 16 Nov 2022 10:02:11 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Table AM and DROP TABLE [ Was: Table AM and DDLs]" } ]
[ { "msg_contents": "Hi, hackers\n\nIn [1] wrote:\n\n> If you don't have lcov or prefer text output over an HTML report, you can also run\n> make coverage\n\n[1] https://www.postgresql.org/docs/13/regress-coverage.html\n\nIt seems the lcov is not a necessary program to run a coverage test.\n\nBut when I configure with --enable-coverage, then error was reported:\n\tchecking for lcov... no\n\tconfigure: error: lcov not found\n\nBecause it's a little difficult to install lcov in offline environment and we can get a \ntext format result by running make coverage. How about change this action, when \nthere is no lcov in system, only a warning message is reported.\n\nThe patch attached. Thought ?\n\nBest regards\nShenhao Wang", "msg_date": "Mon, 22 Feb 2021 08:41:03 +0000", "msg_from": "\"wangsh.fnst@fujitsu.com\" <wangsh.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "do coverage test without install lcov" }, { "msg_contents": "\"wangsh.fnst@fujitsu.com\" <wangsh.fnst@fujitsu.com> writes:\n> Because it's a little difficult to install lcov in offline environment and we can get a \n> text format result by running make coverage. How about change this action, when \n> there is no lcov in system, only a warning message is reported.\n\n> The patch attached. Thought ?\n\nThis should use the \"missing\" mechanism that's already used for,\ne.g., flex and bison.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Feb 2021 10:45:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: do coverage test without install lcov" } ]
[ { "msg_contents": "pg_collation_actual_version() -> pg_collation_current_version().\n\nThe new name seems a bit more natural.\n\nDiscussion: https://postgr.es/m/20210117215940.GE8560%40telsasoft.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/9cf184cc0599b6e65e7e5ecd9d91cd42e278bcd8\n\nModified Files\n--------------\ndoc/src/sgml/func.sgml | 8 ++++----\nsrc/backend/commands/collationcmds.c | 2 +-\nsrc/backend/utils/adt/pg_locale.c | 14 +++++++-------\nsrc/include/catalog/catversion.h | 2 +-\nsrc/include/catalog/pg_proc.dat | 4 ++--\nsrc/test/regress/expected/collate.icu.utf8.out | 6 +++---\nsrc/test/regress/sql/collate.icu.utf8.sql | 6 +++---\n7 files changed, 21 insertions(+), 21 deletions(-)", "msg_date": "Mon, 22 Feb 2021 11:28:52 +0000", "msg_from": "Thomas Munro <tmunro@postgresql.org>", "msg_from_op": true, "msg_subject": "pgsql: pg_collation_actual_version() ->\n pg_collation_current_version()." }, { "msg_contents": "On 22.02.21 12:28, Thomas Munro wrote:\n> pg_collation_actual_version() -> pg_collation_current_version().\n> \n> The new name seems a bit more natural.\n> \n> Discussion: https://postgr.es/m/20210117215940.GE8560%40telsasoft.com\n\nI don't find where this change was discussed in that thread. I \nspecifically chose that name to indicate, \"not the current version in \nthe database, but the version the OS thinks it should be\". I think the \nrename loses that distinction.\n\n\n", "msg_date": "Tue, 23 Feb 2021 07:03:12 +0100", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: pgsql: pg_collation_actual_version() ->\n pg_collation_current_version()." 
}, { "msg_contents": "On Tue, Feb 23, 2021 at 7:03 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n> On 22.02.21 12:28, Thomas Munro wrote:\n> > pg_collation_actual_version() -> pg_collation_current_version().\n> >\n> > The new name seems a bit more natural.\n> >\n> > Discussion: https://postgr.es/m/20210117215940.GE8560%40telsasoft.com\n>\n> I don't find where this change was discussed in that thread. I\n> specifically chose that name to indicate, \"not the current version in\n> the database, but the version the OS thinks it should be\". I think the\n> rename loses that distinction.\n\nI understood \"actual\" to be a way of contrasting with\npg_collation.collversion, which we dropped. Without that, the meaning\nof a more typical function name with \"current\" seemed clearer to me,\nand \"actual\" seemed excessively emphatic. There isn't a concept of a\nsingle \"current version in the database\" anymore, there's just the set\nof relevant versions that were current when each index was built.\nHappy to revert the name change if you hate it though, and sorry I\ndidn't CC you on the thread.\n\n\n", "msg_date": "Tue, 23 Feb 2021 20:23:19 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: pg_collation_actual_version() ->\n pg_collation_current_version()." }, { "msg_contents": "On 23.02.21 08:23, Thomas Munro wrote:\n> On Tue, Feb 23, 2021 at 7:03 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>> On 22.02.21 12:28, Thomas Munro wrote:\n>>> pg_collation_actual_version() -> pg_collation_current_version().\n>>>\n>>> The new name seems a bit more natural.\n>>>\n>>> Discussion: https://postgr.es/m/20210117215940.GE8560%40telsasoft.com\n>>\n>> I don't find where this change was discussed in that thread. I\n>> specifically chose that name to indicate, \"not the current version in\n>> the database, but the version the OS thinks it should be\". 
I think the\n>> rename loses that distinction.\n> \n> I understood \"actual\" to be a way of contrasting with\n> pg_collation.collversion, which we dropped. Without that, the meaning\n> of a more typical function name with \"current\" seemed clearer to me,\n> and \"actual\" seemed excessively emphatic. There isn't a concept of a\n> single \"current version in the database\" anymore, there's just the set\n> of relevant versions that were current when each index was built.\n> Happy to revert the name change if you hate it though, and sorry I\n> didn't CC you on the thread.\n\nSeeing that explanation, I think that's even more of a reason to avoid \nthe name \"current\" and use something strikingly different.\n\nIn any case, this function name has been around for some years now and \nrenaming it just for taste reasons seems unnecessary.\n\n\n", "msg_date": "Thu, 25 Feb 2021 10:49:02 +0100", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: pgsql: pg_collation_actual_version() ->\n pg_collation_current_version()." }, { "msg_contents": "On Thu, Feb 25, 2021 at 10:49 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n> Seeing that explanation, I think that's even more of a reason to avoid\n> the name \"current\" and use something strikingly different.\n>\n> In any case, this function name has been around for some years now and\n> renaming it just for taste reasons seems unnecessary.\n\nI guess my unspoken assumption was that anyone using this in a query\nis probably comparing it with collversion and thus already has a query\nthat needs to be rewritten for v14, and therefore it's not a bad time\nto clean up some naming. But that argument is moot if you don't even\nagree that the new name's an improvement, so... 
reverted.\n\n\n", "msg_date": "Fri, 26 Feb 2021 16:13:54 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: pg_collation_actual_version() ->\n pg_collation_current_version()." }, { "msg_contents": "On 2021-Feb-26, Thomas Munro wrote:\n\n> On Thu, Feb 25, 2021 at 10:49 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n> > Seeing that explanation, I think that's even more of a reason to avoid\n> > the name \"current\" and use something strikingly different.\n> >\n> > In any case, this function name has been around for some years now and\n> > renaming it just for taste reasons seems unnecessary.\n> \n> I guess my unspoken assumption was that anyone using this in a query\n> is probably comparing it with collversion and thus already has a query\n> that needs to be rewritten for v14, and therefore it's not a bad time\n> to clean up some naming. But that argument is moot if you don't even\n> agree that the new name's an improvement, so... reverted.\n\n18 months later I find myself translating this message:\n\n#: utils/init/postinit.c:457\nmsgid \"template database \\\"%s\\\" has a collation version, but no actual collation version could be determined\"\n\nand I have absolutely no idea what to translate it to. What does it\n*mean*? I think if it does mean something important, there should be a\n\"translator:\" comment next to it.\n\nHowever, looking at get_collation_actual_version, I think returning NULL\nis unexpected anyway; it appears that in all cases where something\nfailed, it has already reported an error internally. Also, several\ncallers seem to ignore the possibility of it returning NULL. So I\nwonder if we can just turn this ereport() into an elog() and call it a\nday.\n\n\nThe word \"actual\" is very seldom used in catalogued server messages. 
We\nonly have these few:\n\nmsgid \"template database \\\"%s\\\" has a collation version, but no actual collation version could be determined\"\nmsgid \"could not determine actual type of argument declared %s\"\nmsgid \"RADIUS response from %s has corrupt length: %d (actual length %d)\"\nmsgid \"ShmemIndex entry size is wrong for data structure \\\"%s\\\": expected %zu, actual %zu\"\nmsgid \"could not determine actual enum type\"\nmsgid \"collation \\\"%s\\\" has no actual version, but a version was recorded\"\nmsgid \"could not determine actual result type for function \\\"%s\\\" declared to return type %s\"\nmsgid \"database \\\"%s\\\" has no actual collation version, but a version was recorded\"\n\nand my strategy so far is to translate it to \"real\" (meaning \"true\") or\njust ignore the word altogether, which gives good results IMO. But in\nthis particular it looks like it's a very critical part of the message.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"No me acuerdo, pero no es cierto. No es cierto, y si fuera cierto,\n no me acuerdo.\" (Augusto Pinochet a una corte de justicia)\n\n\n", "msg_date": "Sun, 4 Sep 2022 09:38:33 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: pgsql: pg_collation_actual_version() ->\n pg_collation_current_version()." } ]
[ { "msg_contents": "As Jan mentioned in his thread about a pluggable wire protocol [0], AWS is\nworking on a set of extensions for Babelfish. The intention is to not\nnecessarily have it as a single monolithic extension, but be possible for\npeople to use pieces of it as they need when they are migrating to\nPostgreSQL. Some may just need the functions or data types. Others may need\nthe stored procedure language. Many times when enterprises are migrating\ndatabases, they have satellite applications that they may not be able to\nchange or they are on a different schedules than the main application so\nthe database still needs to support some of the old syntax. A common need\nin these situations is the parser.\n\nAttached is a patch to place a hook at the top of the parser to allow for a\npluggable parser. It is modeled after the planner_hook [1]. To test the\nhook, I have also attached a simple proof of concept that wraps the parser\nin a TRY/CATCH block to catch any parse errors. That could potentially help\na class of users who are sensitive to parse errors ending up in the logs\nand leaking PII data or passwords.\n\n-- Jim\n-- Amazon Web Services\n\n[0] -\nhttps://www.postgresql.org/message-id/flat/CAGBW59d5SjLyJLt-jwNv%2BoP6esbD8SCB%3D%3D%3D11WVe5%3DdOHLQ5wQ%40mail.gmail.com\n[1] -\nhttps://www.postgresql.org/message-id/flat/27516.1180053940%40sss.pgh.pa.us", "msg_date": "Mon, 22 Feb 2021 11:20:54 -0500", "msg_from": "Jim Mlodgenski <jimmy76@gmail.com>", "msg_from_op": true, "msg_subject": "Parser Hook" }, { "msg_contents": "Hi,\n\nOn 2021-02-22 11:20:54 -0500, Jim Mlodgenski wrote:\n> As Jan mentioned in his thread about a pluggable wire protocol [0], AWS is\n> working on a set of extensions for Babelfish. The intention is to not\n> necessarily have it as a single monolithic extension, but be possible for\n> people to use pieces of it as they need when they are migrating to\n> PostgreSQL. Some may just need the functions or data types. 
Others may need\n> the stored procedure language. Many times when enterprises are migrating\n> databases, they have satellite applications that they may not be able to\n> change or they are on a different schedules than the main application so\n> the database still needs to support some of the old syntax. A common need\n> in these situations is the parser.\n> \n> Attached is a patch to place a hook at the top of the parser to allow for a\n> pluggable parser. It is modeled after the planner_hook [1]. To test the\n> hook, I have also attached a simple proof of concept that wraps the parser\n> in a TRY/CATCH block to catch any parse errors. That could potentially help\n> a class of users who are sensitive to parse errors ending up in the logs\n> and leaking PII data or passwords.\n\nI don't think these are really comparable. In case of the planner hook\nyou can reuse the normal planner pieces, and just deal with the one part\nyou need to extend. But we have pretty much no infrastructure to use the\nparser in a piecemeal fashion (there's a tiny bit for plpgsql).\n\nWhich in turn means that to effectively use the proposed hook to\n*extend* what postgres accepts, you need to copy the existing parser,\nand hack in your extensions. Which in turn invariably will lead to\ncomplaints about parser changes / breakages the community will get\ncomplaints about in minor releases etc.\n\nI think the cost incurred for providing a hook that only allows\nextensions to replace the parser with a modified copy of ours will be\nhigher than the gain. 
Note that I'm not saying that I'm against\nextending the parser, or hooks - just that I don't think just adding the\nhook is a step worth doing on its own.\n\nImo a functional approach would really need to do the work to allow to\nextend & reuse the parser in a piecemeal fashion and *then* add a hook.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 22 Feb 2021 12:52:42 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Parser Hook" }, { "msg_contents": "On Mon, Feb 22, 2021 at 3:52 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2021-02-22 11:20:54 -0500, Jim Mlodgenski wrote:\n> > As Jan mentioned in his thread about a pluggable wire protocol [0], AWS\n> is\n> > working on a set of extensions for Babelfish. The intention is to not\n> > necessarily have it as a single monolithic extension, but be possible for\n> > people to use pieces of it as they need when they are migrating to\n> > PostgreSQL. Some may just need the functions or data types. Others may\n> need\n> > the stored procedure language. Many times when enterprises are migrating\n> > databases, they have satellite applications that they may not be able to\n> > change or they are on a different schedules than the main application so\n> > the database still needs to support some of the old syntax. A common need\n> > in these situations is the parser.\n> >\n> > Attached is a patch to place a hook at the top of the parser to allow\n> for a\n> > pluggable parser. It is modeled after the planner_hook [1]. To test the\n> > hook, I have also attached a simple proof of concept that wraps the\n> parser\n> > in a TRY/CATCH block to catch any parse errors. That could potentially\n> help\n> > a class of users who are sensitive to parse errors ending up in the logs\n> > and leaking PII data or passwords.\n>\n> I don't think these are really comparable. 
In case of the planner hook\n> you can reuse the normal planner pieces, and just deal with the one part\n> you need to extend. But we have pretty much no infrastructure to use the\n> parser in a piecemeal fashion (there's a tiny bit for plpgsql).\n>\n> Which in turn means that to effectively use the proposed hook to\n> *extend* what postgres accepts, you need to copy the existing parser,\n> and hack in your extensions. Which in turn invariably will lead to\n> complaints about parser changes / breakages the community will get\n> complaints about in minor releases etc.\n>\n\nGoing deeper on this, I created another POC as an example. Yes, having a\nhook at the top of the parser does mean an extension needs to copy the\nexisting grammar and modify it. Without a total redesign of how the grammar\nis handled, I'm not seeing how else this could be accomplished. The example\nI have is adding a CREATE JOB command that a scheduler may use. The amount\nof effort needed for an extension maintainer doesn't appear to be that\nonerous. Its not ideal having to copy and patch gram.y, but certainly\ndoable for someone wanting to extend the parser. I also extended the patch\nto add another hook in parse_expr.c to see what we would need to add\nanother keyword and have it call a function like SYSDATE. That appears to\nbe a lot of work to get all of the potentail hook points that an extension\nmay want to add and there may not be that many usecases worth the effort.\n\n\n\n> I think the cost incurred for providing a hook that only allows\n> extensions to replace the parser with a modified copy of ours will be\n> higher than the gain. 
Note that I'm not saying that I'm against\n> extending the parser, or hooks - just that I don't think just adding the\n> hook is a step worth doing on its own.\n>\n>\nHowever we would want to modify the parser to allow it to be more plugable\nin the future, we would very likely need to have a hook at the top of the\nparser to intiailize things like keywords. Having a hook at the top of the\nparser along with the existing ProcessUtility_hook allows extension to add\ntheir own utility commands if they wish. I would image that covers many\nexisting use cases for extensions today.\n\n\n> Imo a functional approach would really need to do the work to allow to\n> extend & reuse the parser in a piecemeal fashion and *then* add a hook.\n>\n> Greetings,\n>\n> Andres Freund\n>", "msg_date": "Mon, 15 Mar 2021 11:48:58 -0400", "msg_from": "Jim Mlodgenski <jimmy76@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parser Hook" }, { "msg_contents": "On Mon, Mar 15, 2021 at 11:48:58AM -0400, Jim Mlodgenski wrote:\n> \n> Going deeper on this, I created another POC as an example. Yes, having a\n> hook at the top of the parser does mean an extension needs to copy the\n> existing grammar and modify it. Without a total redesign of how the grammar\n> is handled, I'm not seeing how else this could be accomplished. The example\n> I have is adding a CREATE JOB command that a scheduler may use. The amount\n> of effort needed for an extension maintainer doesn't appear to be that\n> onerous. Its not ideal having to copy and patch gram.y, but certainly\n> doable for someone wanting to extend the parser.\n\nAFAIK nothing in bison prevents you from silently ignoring unhandled grammar\nrather than erroring out. So you could have a parser hook called first, and\nif no valid command was recognized fall back on the original parser. 
I'm not\nsaying that it's a good idea or will be performant (although the added grammar\nwill likely be very small, so it may not be that bad), but you could definitely\navoid the need to duplicate the whole grammar in each and every extension, and\nallow multiple extensions extending the grammar.\n\nThat won't reduce the difficulty of producing a correct parse tree if you want\nto implement some syntactic sugar for already handled DML though.\n\n> However we would want to modify the parser to allow it to be more plugable\n> in the future, we would very likely need to have a hook at the top of the\n> parser to intiailize things like keywords. Having a hook at the top of the\n> parser along with the existing ProcessUtility_hook allows extension to add\n> their own utility commands if they wish. I would image that covers many\n> existing use cases for extensions today.\n\nWhat happens if multiple extensions want to add their own new grammar? There\nwill at least be possible conflicts with the additional node tags.\n\nAlso, I'm not sure that many extensions would really benefit from custom\nutility command, as you can already do pretty much anything you want using SQL\nfunctions. For instance it would be nice for hypopg to be able to support\n\nCREATE HYPOTHETICAL INDEX ...\n\nrather than\n\nSELECT hypopg_create_index('CREATE INDEX...')\n\nBut really the only benefit would be autocompletion, which still wouldn't be\npossible as psql autocompletion won't be extended. And even if it somehow was,\nI wouldn't expect all psql clients to be setup as needed.\n\n\n", "msg_date": "Tue, 16 Mar 2021 00:43:36 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parser Hook" }, { "msg_contents": "On Mon, Mar 15, 2021, at 16:48, Jim Mlodgenski wrote:\n> The example I have is adding a CREATE JOB command that a scheduler may use. 
\n\nThis CREATE JOB thing sounds interesting.\n\nAre you working on adding the ability to schedule SQL-commands to run in the background,\nsimilar to cronjob and/or adding ampersand (\"&\") to a command in the terminal?\n\nI couldn't figure it out by reading the patch.\nI noted the \"insert into extended_parser.jobs\" query,\nwhich to me sounds like the job would be some kind of parsing job,\nbut that seems strange.\n\n/Joel\n", "msg_date": "Mon, 15 Mar 2021 17:58:34 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: Parser Hook" }, { "msg_contents": "On Mon, Mar 15, 2021 at 12:43 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Mon, Mar 15, 2021 at 11:48:58AM -0400, Jim Mlodgenski wrote:\n> >\n> > Going deeper on this, I created another POC as an example. Yes, having a\n> > hook at the top of the parser does mean an extension needs to copy the\n> > existing grammar and modify it. Without a total redesign of how the\n> grammar\n> > is handled, I'm not seeing how else this could be accomplished. The\n> example\n> > I have is adding a CREATE JOB command that a scheduler may use. The\n> amount\n> > of effort needed for an extension maintainer doesn't appear to be that\n> > onerous. 
Its not ideal having to copy and patch gram.y, but certainly\n> > doable for someone wanting to extend the parser.\n>\n> AFAIK nothing in bison prevents you from silently ignoring unhandled\n> grammar\n> rather than erroring out. So you could have a parser hook called first,\n> and\n> if no valid command was recognized fall back on the original parser. I'm\n> not\n> saying that it's a good idea or will be performant (although the added\n> grammar\n> will likely be very small, so it may not be that bad), but you could\n> definitely\n> avoid the need to duplicate the whole grammar in each and every extension,\n> and\n> allow multiple extensions extending the grammar.\n>\n>\nThat's a good point. That does simplify it\n\n\n> That won't reduce the difficulty of producing a correct parse tree if you\n> want\n> to implement some syntactic sugar for already handled DML though.\n>\n\nComplex DML like Oracle's outer join syntax is tricky no matter which way\nyou slice it.\n\n\n> > However we would want to modify the parser to allow it to be more\n> plugable\n> > in the future, we would very likely need to have a hook at the top of the\n> > parser to intiailize things like keywords. Having a hook at the top of\n> the\n> > parser along with the existing ProcessUtility_hook allows extension to\n> add\n> > their own utility commands if they wish. I would image that covers many\n> > existing use cases for extensions today.\n>\n> What happens if multiple extensions want to add their own new grammar?\n> There\n> will at least be possible conflicts with the additional node tags.\n>\n\nThe extensions would need to play nice with one another like they do with\nother hooks and properly call the previous hook.\n\n\n> Also, I'm not sure that many extensions would really benefit from custom\n> utility command, as you can already do pretty much anything you want using\n> SQL\n> functions. 
For instance it would be nice for hypopg to be able to support\n>\n> CREATE HYPOTHETICAL INDEX ...\n>\n> rather than\n>\n> SELECT hypopg_create_index('CREATE INDEX...')\n>\n> But really the only benefit would be autocompletion, which still wouldn't\n> be\n> possible as psql autocompletion won't be extended. And even if it somehow\n> was,\n> I wouldn't expect all psql clients to be setup as needed.\n>\n\nHaving the functionality exposed through DDL gives it a more native feel to\nit for users and for some more likely use the exentions.\n", "msg_date": "Mon, 15 Mar 2021 13:02:20 -0400", "msg_from": "Jim Mlodgenski <jimmy76@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parser Hook" }, { "msg_contents": "On Mon, Mar 15, 2021 at 12:58 PM Joel Jacobson <joel@compiler.org> wrote:\n\n> On Mon, Mar 15, 2021, at 16:48, Jim Mlodgenski wrote:\n>\n> The example I have is adding a CREATE JOB command that a scheduler may\n> use.\n>\n>\n> This CREATE JOB thing sounds interesting.\n>\n> Are you working on adding the ability to schedule SQL-commands to run in\n> the background,\n> similar to cronjob and/or adding ampersand (\"&\") to a command in the\n> terminal?\n>\n\nNo, it was just a sample of how the parser could be extended to all an\nextension like pg_cron can use CREATE JOB instead of calling a function\nlike SELECT cron.schedule(...)\n", "msg_date": "Mon, 15 Mar 2021 13:05:25 -0400", "msg_from": "Jim Mlodgenski <jimmy76@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Parser Hook" }, { "msg_contents": "> Also, I'm not sure that many extensions would really benefit from custom\n> utility command, as you can already do pretty much anything you want using\n> SQL\n> functions. 
For instance it would be nice for hypopg to be able to support\n>\n> CREATE HYPOTHETICAL INDEX ...\n>\n> rather than\n>\n> SELECT hypopg_create_index('CREATE INDEX...')\n>\n> But really the only benefit would be autocompletion, which still wouldn't\n> be\n> possible as psql autocompletion won't be extended. And even if it somehow\n> was,\n> I wouldn't expect all psql clients to be setup as needed.\n>\n\nThe extending parser can be interesting for two cases\n\na) compatibility with other databases\n\nb) experimental supports of some features standard (current or future)\n\nc) some experiments - using\n\nCREATE PIPE xxx(xxx), or CREATE HYPERCUBE xxx (xxx) is more readable more\nSQLish (more natural syntax) than\n\nSELECT create_pipe('name', 'a1', 'int', ...) or SELECT ext('CREATE PIPE ...)\n\nPossibility to work with a parser is one main reason for forking postgres.\nLot of interestings projects fail on the cost of maintaining their own fork.\n\nMaybe a good enough possibility is the possibility to inject an own parser\ncalled before Postgres parser. Then it can do a transformation from \"CREATE\nPIPE ...\" to \"SELECT extparse(\"CREATE PIPE()\". There can be a switch if\nreturned content is string for reparsing or already prepared AST.\n\nIt can be very interesting feature.\n\nPavel\n", "msg_date": "Mon, 15 Mar 2021 18:05:52 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parser Hook" }, { "msg_contents": "On Mon, Mar 15, 2021 at 06:05:52PM +0100, Pavel Stehule wrote:\n> \n> Possibility to work with a parser is one main reason for forking postgres.\n> Lot of interestings projects fail on the cost of maintaining their own fork.\n> \n> Maybe a good enough possibility is the possibility to inject an own parser\n> called before Postgres parser. Then it can do a transformation from \"CREATE\n> PIPE ...\" to \"SELECT extparse(\"CREATE PIPE()\". 
There can be a switch if\n> returned content is string for reparsing or already prepared AST.\n\nHaving a hook that returns a reformatted query string would definitely be\neasier to write compared to generating an AST, but the overhead of parsing the\nquery twice plus deparsing it will probably make that approach way too\nexpensive in many usecases, so we shouldn't go that way.\n\n\n", "msg_date": "Tue, 16 Mar 2021 01:18:57 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parser Hook" }, { "msg_contents": "po 15. 3. 2021 v 18:18 odesílatel Julien Rouhaud <rjuju123@gmail.com>\nnapsal:\n\n> On Mon, Mar 15, 2021 at 06:05:52PM +0100, Pavel Stehule wrote:\n> >\n> > Possibility to work with a parser is one main reason for forking\n> postgres.\n> > Lot of interestings projects fail on the cost of maintaining their own\n> fork.\n> >\n> > Maybe a good enough possibility is the possibility to inject an own\n> parser\n> > called before Postgres parser. Then it can do a transformation from\n> \"CREATE\n> > PIPE ...\" to \"SELECT extparse(\"CREATE PIPE()\". There can be a switch if\n> > returned content is string for reparsing or already prepared AST.\n>\n> Having a hook that returns a reformatted query string would definitely be\n> easier to write compared to generating an AST, but the overhead of parsing\n> the\n> query twice plus deparsing it will probably make that approach way too\n> expensive in many usecases, so we shouldn't go that way.\n>\n\nyes - so it can be nice to have more possibilities.\n\nparsing is expensive - but on today computers, the cost of parsing is low -\nthe optimization is significantly more expensive.\n\nI wrote some patches in this area (all rejected by Tom :)), and a lot of\nwork can be done after parser and before the analysis stage. Probably, the\nparser hook is not good enough, there should be an analysis stage hook too.\n\n", "msg_date": "Mon, 15 Mar 2021 18:41:36 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parser Hook" }, { "msg_contents": "On Mon, Mar 15, 2021 at 06:41:36PM +0100, Pavel Stehule wrote:\n> po 15. 3. 
2021 v 18:18 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> napsal:\n> \n> > On Mon, Mar 15, 2021 at 06:05:52PM +0100, Pavel Stehule wrote:\n> > >\n> > > Possibility to work with a parser is one main reason for forking\n> > postgres.\n> > > Lot of interestings projects fail on the cost of maintaining their own\n> > fork.\n> > >\n> > > Maybe a good enough possibility is the possibility to inject an own\n> > parser\n> > > called before Postgres parser. Then it can do a transformation from\n> > \"CREATE\n> > > PIPE ...\" to \"SELECT extparse(\"CREATE PIPE()\". There can be a switch if\n> > > returned content is string for reparsing or already prepared AST.\n> >\n> > Having a hook that returns a reformatted query string would definitely be\n> > easier to write compared to generating an AST, but the overhead of parsing\n> > the\n> > query twice plus deparsing it will probably make that approach way too\n> > expensive in many usecases, so we shouldn't go that way.\n> >\n> \n> yes - so it can be nice to have more possibilities.\n> \n> parsing is expensive - but on today computers, the cost of parsing is low -\n> the optimization is significantly more expensive.\n> \n> I wrote some patches in this area (all rejected by Tom :)), and a lot of\n> work can be done after parser and before the analysis stage. Probably, the\n> parser hook is not good enough, there should be an analysis stage hook too.\n\nIf you need an parse/analyse hook, it means that you're extending the AST, so\nyou probably also need executor support for that right? Or is it only to\nsupport syntactic sugar in the analysis rather than parsing?\n\n\n", "msg_date": "Tue, 16 Mar 2021 01:54:47 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parser Hook" }, { "msg_contents": "po 15. 3. 2021 v 18:54 odesílatel Julien Rouhaud <rjuju123@gmail.com>\nnapsal:\n\n> On Mon, Mar 15, 2021 at 06:41:36PM +0100, Pavel Stehule wrote:\n> > po 15. 3. 
2021 v 18:18 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> > napsal:\n> >\n> > > On Mon, Mar 15, 2021 at 06:05:52PM +0100, Pavel Stehule wrote:\n> > > >\n> > > > Possibility to work with a parser is one main reason for forking\n> > > postgres.\n> > > > Lot of interestings projects fail on the cost of maintaining their\n> own\n> > > fork.\n> > > >\n> > > > Maybe a good enough possibility is the possibility to inject an own\n> > > parser\n> > > > called before Postgres parser. Then it can do a transformation from\n> > > \"CREATE\n> > > > PIPE ...\" to \"SELECT extparse(\"CREATE PIPE()\". There can be a switch\n> if\n> > > > returned content is string for reparsing or already prepared AST.\n> > >\n> > > Having a hook that returns a reformatted query string would definitely\n> be\n> > > easier to write compared to generating an AST, but the overhead of\n> parsing\n> > > the\n> > > query twice plus deparsing it will probably make that approach way too\n> > > expensive in many usecases, so we shouldn't go that way.\n> > >\n> >\n> > yes - so it can be nice to have more possibilities.\n> >\n> > parsing is expensive - but on today computers, the cost of parsing is\n> low -\n> > the optimization is significantly more expensive.\n> >\n> > I wrote some patches in this area (all rejected by Tom :)), and a lot of\n> > work can be done after parser and before the analysis stage. Probably,\n> the\n> > parser hook is not good enough, there should be an analysis stage hook\n> too.\n>\n> If you need an parse/analyse hook, it means that you're extending the AST,\n> so\n> you probably also need executor support for that right? Or is it only to\n> support syntactic sugar in the analysis rather than parsing?\n>\n\nI think all necessary executor's hooks are available already. On the\nexecutor level, I was able to do all what I wanted.\n\nI miss just preparsing, and postparsing hooks.\n\nRegards\n\nPavel\n\n", "msg_date": "Mon, 15 Mar 2021 19:02:39 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Parser Hook" }, { "msg_contents": ">\n>\n>\n> Also, I'm not sure that many extensions would really benefit from custom\n> utility command, as you can already do pretty much anything you want using\n> SQL\n> functions. For instance it would be nice for hypopg to be able to support\n>\n> CREATE HYPOTHETICAL INDEX ...\n>\n> rather than\n>\n> SELECT hypopg_create_index('CREATE INDEX...')\n>\n> But really the only benefit would be autocompletion, which still wouldn't\n> be\n> possible as psql autocompletion won't be extended. And even if it somehow\n> was,\n> I wouldn't expect all psql clients to be setup as needed.\n>\n\n\"technically\" speaking you are correct, usability speaking you are not. We\nran into this discussion previously when dealing with replication. There is\ncertainly a history to calling functions to do what the grammar (from a\nusability perspective) should do and that is not really a good history. It\nis just what we are all used to. Looking at what you wrote above as a DBA\nor even an average developer: CREATE HYPOTHETICAL INDEX makes much more\nsense than the SELECT execution.\n\nJD\n\nP.S. I had to write HYPOTHETICAL 4 times, I kept typing HYPOTECHNICAL :/\n\n\n", "msg_date": "Mon, 15 Mar 2021 11:39:55 -0700", "msg_from": "Joshua Drake <jd@commandprompt.com>", "msg_from_op": false, "msg_subject": "Re: Parser Hook" } ]
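The rewrite-before-parse idea discussed in the thread — intercept the raw query string before PostgreSQL's own parser, turn unsupported syntax into a call to an existing SQL function, and hand the string back for normal parsing — can be illustrated with a small standalone sketch. Core PostgreSQL has no such pre-parse hook, so the `rewrite()` layer below is purely hypothetical; `hypopg_create_index()` is the real function exposed by the hypopg extension mentioned above.

```python
import re

# Hypothetical pre-parse rewrite layer (no such hook exists in core
# PostgreSQL).  It only illustrates the string transformation:
#   CREATE HYPOTHETICAL INDEX ...  ->  SELECT hypopg_create_index('CREATE INDEX ...')
_HYPO = re.compile(r"^\s*CREATE\s+HYPOTHETICAL\s+INDEX\s+", re.IGNORECASE)

def rewrite(sql: str) -> str:
    """Return the query string a pre-parse hook would hand back for reparsing."""
    m = _HYPO.match(sql)
    if m is None:
        return sql  # not our syntax: pass the statement through untouched
    # Re-emit the statement as a regular CREATE INDEX wrapped in the SQL
    # function call, doubling single quotes for the string literal.
    inner = "CREATE INDEX " + sql[m.end():].rstrip().rstrip(";")
    return "SELECT hypopg_create_index('%s')" % inner.replace("'", "''")
```

As Julien notes in the thread, a hook of this shape is costly: the rewritten string goes through the parser again from scratch, so every matching statement pays for one extra parse (plus the rewrite itself) compared to a hook that could return a prepared AST directly.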